YouTube comments of @ContinuousDelivery.

  1. 571
  2. 525
  3. 423
  4. 263
  5. 223
  6. 117
  7. 102
  8. 101
  9. 76
  10. 75
  11. 67
  12. 63
  13. 49
  14. 48
  15. 45
  16. 44
  17. 39
  18. ​ @benbaert2166  Sociology is difficult, and this kind of research is sociology. One of the problems is getting enough data for statistics to start working. I agree with your criticisms of the research, but who is going to pay for research on pair programming? No one has a real commercial interest; there was a flurry of research in academic circles (hence the use of students) at the start of the century, when pair programming was first introduced. I report what data I can find, but I also have my personal experience which, being interested in science, I know to be a bad indicator. Nevertheless, in my small sample (probably bigger than yours, because I am older and so have had time to work in more different places, but still not statistically significant across an industry), in all the best places I have seen, some form of frequent, regular, intensive collaboration was a factor. In nearly all places where I saw, often good, experienced, developers working alone, their output was worse. This is not always true; I have been lucky and worked with some great programmers who always did good work, but even the best did better work when working closely with other people. The idea of the programmer as a lone, socially-isolated genius is a myth, and a damaging one in my opinion. These views are reinforced by what small, inadequate evidence there is, but SW dev is simply not studied in enough detail to rely only on data; we do have to guess and make a choice. I know for a fact that you don't have data that disproves pair programming, for example, because there is none. I bet you don't have data for why you use the language or tools that you use, because there isn't any that says one is better than another. For me that means we must each experiment with ideas like these. We can't trust our own "likes" and "intuition"; we need to figure out what works in our own specific, small context.
People like me can give you my advice, and like all advice, it is for you to decide how to use it. I try to only offer advice when I have tried both ways. I have done a lot of programming with pairing and a lot more without. For me and the teams I worked on, pairing always worked better, significantly better, in the teams that tried it.
    38
  19. ​ @visiongt3944  My main advice is to write as much code as you can. Find projects that interest or excite you; play with different kinds of things: maths problems, graphics, little tools to help you do something, almost anything. Next, I would advise that you learn TDD. Watch a few of my videos on that topic, I hope that they may help, but also pick a 'coding kata' to practice. Take a look here for some inspiration: https://cyber-dojo.org/ On tech stacks, I don't think that should be very important; however, there are lots of orgs that aren't very good at interviewing, and so all they go on is a check-list of tech. That is a terrible way to interview anyone, but particularly bad for people just starting out. If I were in your place now, knowing what I know, I would treat that as a bad sign for that employer, but, being pragmatic, getting your first job is hard and so you may have to play that game. My advice, though: don't treat them like a collection; pick tools and tech that you like working with and get good with that. You can look at what sorts of tech are popular in places where you would like to work, or just pick the most popular things generally. Python, Java, and JavaScript are popular, but it does depend on what you want to do. As for frameworks or platforms, I wouldn't worry too much about that; most orgs will use a few technologies, and it will be hit or miss whether you have the ones that they use. Also, they are ephemeral, and will change all the time through your career. The skill that you will/should develop is the skill to learn new ones. You get that from doing real work and thinking about what is happening in terms of design, not just the syntax of the use of framework A over framework B. Take a look at this for thinking about languages: https://dzone.com/articles/top-10-most-promising-programming-languages-for-20 but first get comfortable with one (oh, and with TDD 😁 😎).
    37
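To make the 'coding kata' suggestion above concrete, here is a minimal sketch of what a first TDD cycle on a kata might look like. The FizzBuzz kata and all names here are my own illustration, not something from the comment; in real TDD you would write one failing test at a time, then just enough code to pass it.

```python
import unittest

# TDD sketch on the classic FizzBuzz kata: the tests express the intended
# behaviour first; the implementation exists only to make them pass.

def fizzbuzz(n):
    """Return 'Fizz' for multiples of 3, 'Buzz' for 5, 'FizzBuzz' for both."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTest(unittest.TestCase):
    def test_plain_numbers_pass_through(self):
        self.assertEqual(fizzbuzz(1), "1")

    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_multiples_of_five(self):
        self.assertEqual(fizzbuzz(5), "Buzz")

    def test_multiples_of_both(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
```

Run with `python -m unittest` against the file; sites like cyber-dojo give you many katas of this shape to practice on.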
  20. 37
  21. 36
  22. 34
  23. 33
  24. 32
  25. 31
  26. 30
  27. 28
  28. 26
  29. 25
  30. 25
  31. 25
  32. 25
  33. 24
  34. 24
  35. 24
  36. 23
  37. 23
  38. 23
  39. 22
  40. 21
  41. 20
  42. 20
  43. 19
  44. 19
  45. 19
  46. 18
  47. 18
  48.  @CosineKitty  I am kind of on the fence when it comes to #NoEstimates. I think that at heart the idea is right, but that pragmatically estimates are probably going to stick around, rather like astrology: no basis in reason, but people like the habit. The problem with estimates is that they are always wrong, and they are usually treated as though they are firm commitments. This seems to be based on the idea that, in the absence of estimates, dev teams would slack off and not work so hard. I see no evidence of this at all. Let's just imagine, for a moment, a perfect dev team. They are working at the limit, producing great, high-quality software as fast as anyone could. What would estimates do for this team and this business? They could only slow them down, because now we are asking the team to do stuff that doesn't directly contribute to the creation of great software, in addition to what they were doing before. The problem is that orgs like the illusion of predictability. This is completely unreal; certainly in software, but also in commercial performance, there is no predictability. This is doubly true when attempting something new for the first time, and in software we are always producing something new for the first time - otherwise why would we bother, since we could copy it for free. So I am philosophically in the #NoEstimates camp, but in reality there are times when you can't avoid them, because orgs like estimates in the same way that kids like Xmas and the tooth fairy. Under those circumstances, I won't just make stuff up, but I will try to minimise the work invested in the estimation. I did a video on this topic a while ago: https://youtu.be/v21jg8wb1eU
    18
  49. 18
  50. 18
  51. 17
  52. 17
  53. 17
  54. 17
  55. 17
  56. 17
  57. 16
  58. 16
  59. 16
  60. 16
  61. 16
  62. 16
  63. 16
  64. 16
  65. 16
  66. 15
  67. 15
  68. 15
  69. 15
  70. 15
  71. 15
  72. 15
  73. 15
  74. 14
  75. 14
  76. Well, first, thanks for the thoughtful response. I don't claim that what you are describing are not useful ideas; they are - much better than what went before. However, in this one narrow context I can be more definitive, maybe even dogmatic, than I usually am, because I invented the concept that we are talking about. The term "Deployment Pipeline" is one that I created for the Continuous Delivery book, so I can be sure of the definition for this one thing. What I describe here is exactly what I meant, and have been applying for the last 20 years or so, on real projects. One area where we may be getting confused on the approach: I am NOT saying that the pipeline has to automatically release into production. The decision to release can be automated (Continuous Deployment) or manual (push-button). My point is that the "Deployment Pipeline" is definitive in terms of what constitutes releasability. If the pipeline says everything is good, there is no more work to do apart from "pushing the button to release", and that is a choice. So, where we differ, perhaps: the "Release Pipeline" and the "Deployment Pipeline" are not different, but "Release" and "Deployment" are. I can choose to deploy any change into production that passes the evaluation in the "Deployment Pipeline"; that doesn't necessarily mean that I have released a feature - the change may be part of a feature that is not yet ready for release (I talk about that idea in more detail here: https://youtu.be/v4Ijkq6Myfc). My point here is that we can use the idea of a "Deployment Pipeline" as THE organising principle for our development approach. It is definitive in terms of what it takes to get to a point where we can safely, and with confidence, deploy a change (hence the name), and that idea is a lot more valuable than breaking up the process into a series of different pipelines. I hope that, at least, my thinking is clearer?
    14
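The "definitive pipeline" idea above can be sketched as a toy model; all names here are my own illustration, not Farley's code. The point it shows: if every stage passes, the change is releasable, and "release" is a separate decision that may be automated (Continuous Deployment) or manual.

```python
# Toy model: the deployment pipeline is the single, definitive judge of
# releasability; releasing is a separate, optional decision.

class DeploymentPipeline:
    def __init__(self, stages):
        # stages: ordered (name, check) pairs; each check returns True
        # when its evaluation of the change passes.
        self.stages = stages

    def evaluate(self, change):
        """Run every stage in order; releasable only if all pass."""
        for name, check in self.stages:
            if not check(change):
                return (False, name)   # definitive: not releasable, and why
        return (True, None)            # definitive: nothing left but the button

def push_the_button(change, pipeline, auto_release=False):
    releasable, failed_stage = pipeline.evaluate(change)
    if not releasable:
        raise RuntimeError(f"Not releasable: failed {failed_stage}")
    # Continuous Deployment automates this decision; otherwise it is manual.
    return "released" if auto_release else "awaiting manual release decision"
```

Usage: `push_the_button("change-1", pipeline)` either raises (the pipeline's verdict is final) or leaves only the human or automated choice to release.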
  77. 14
  78. 14
  79. 14
  80. 14
  81. 14
  82. 14
  83. 14
  84. 13
  85. 13
  86. 13
  87. 13
  88. 13
  89. 13
  90. 13
  91. 13
  92. 13
  93. 13
  94. 13
  95. 13
  96. 13
  97. 12
  98. 12
  99. 12
  100. 12
  101. 12
  102. Emily, thanks for the great feedback. The point of my video series was to try and demonstrate the uncertainty that is inherent in this kind of exercise, so I didn't rehearse it or plan it. I am not sure that I was clear enough on the trailing comma thing. I should have been more explicit that at that moment I was experimenting to see how the code worked before I decided how to proceed. This would have been clearer, perhaps, if I hadn't taken the short-cut of not really committing to save time. There is no way that I would have committed the change with the line of code commented out! Yes, I hummed and hawed about showing the creation of the Approval test - I think that I will do another separate video on that sometime. I basically took the snippet of sample XML that was in the comments in the code, wrote a test based on that as the input, then measured coverage and added more XML that I guessed would increase the coverage until it did. Whether or not the code counts as "Testable" based only on the Approval tests is debatable, I guess. Strictly, I guess you are correct, but I suppose that I fall into the trap of the overloaded nature of the word "Test" in the context of TDD. What I really mean by "Testability" is "Designable Through Executable Specifications". Approval tests don't do that, which is why I reacted against them a bit when I first heard of them from you. I now see their value, but it is not the same thing that you get from TDD. I suppose that, by sports analogy, Approval Tests are defensive tests and TDD tests are offensive tests? You are probably right that the video would have been in better context if I had a planned feature to add. Anyway, thanks again for the feedback.
    12
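For readers unfamiliar with the approval-testing idea being discussed: rather than specifying behaviour up front (as in TDD), you capture the output the code currently produces and "approve" it; future runs must match. This is a minimal hand-rolled sketch of that mechanism, with file names and the helper entirely my own illustration (real tools like ApprovalTests handle this for you):

```python
import pathlib

# Approval-test sketch: compare actual output against a human-approved file.
# On the first run there is nothing approved yet, so we record what the code
# does today and fail, asking a human to review and approve it.

def verify_approved(name, actual, approved_dir="approvals"):
    approved = pathlib.Path(approved_dir) / f"{name}.approved.txt"
    received = pathlib.Path(approved_dir) / f"{name}.received.txt"
    approved.parent.mkdir(exist_ok=True)
    if not approved.exists():
        received.write_text(actual)   # snapshot of current behaviour
        raise AssertionError(f"No approved output yet; review {received}")
    assert actual == approved.read_text(), "Output changed from approved version"
```

This is the "defensive" character mentioned above: it pins down existing behaviour (useful with coverage when taming legacy code), but it does not drive the design the way an executable specification written first does.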
  103. 12
  104. 12
  105. 12
  106. 12
  107. 12
  108. 12
  109. 11
  110. 11
  111. 11
  112. 11
  113. 11
  114. 11
  115. 11
  116. 11
  117. 11
  118. 11
  119. 11
  120. 11
  121. 11
  122. 11
  123. 11
  124. 11
  125. 11
  126. 11
  127. 11
  128. 11
  129. 11
  130. 11
  131. 10
  132. 10
  133. 10
  134. 10
  135. 10
  136. 10
  137. 10
  138. 10
  139. 10
  140. 10
  141. 10
  142. 10
  143. 10
  144. 10
  145. 10
  146. 10
  147. 10
  148. 10
  149. 10
  150. 10
  151. 10
  152. 10
  153. 10
  154. 10
  155. 10
  156. 10
  157. 10
  158. 10
  159. 10
  160. 10
  161. 10
  162. 10
  163. 10
  164. 10
  165. 9
  166. 9
  167. 9
  168. 9
  169. 9
  170. 9
  171. 9
  172. 9
  173. 9
  174. 9
  175. 9
  176. 9
  177. 9
  178. 9
  179. 9
  180. 9
  181. 9
  182. 9
  183. 9
  184. 9
  185. 9
  186. 9
  187. 9
  188. 9
  189. 9
  190. 9
  191. 9
  192. 9
  193. 9
  194. 9
  195. 9
  196. 8
  197. 8
  198. 8
  199. 8
  200. 8
  201. 8
  202. 8
  203. 8
  204. 8
  205. 8
  206. 8
  207. 8
  208. 8
  209. I agree, convincing people is the hardest part, but I don't think that we can lay this only at the door of the business. Don't get me wrong, they don't get a free pass, but in my experience we, developers, often blame the biz for our own assumptions about the biz. Sure, a commercial person is going to ask for more sooner, but I think that it is relatively rare for biz people to actively tell dev teams to "cut corners", or "don't test", or "don't refactor", or "create low-quality work to make the dates". The problem is that it is always a difficult conversation to say "No" to people, so we tend to say the things that we think they want to hear. So step one is, at least, taking ownership of the stuff that we should own. It is our responsibility to do a good job. It is not up to someone who isn't working on the code to tell us how best to write the code. So: no estimates that give the "option" of not testing, not refactoring, not staying on top of tech-debt, and so on. Next, I think that we need to speak to non-technical people in terms that make sense to them, not in terms that we understand. An important starting point for this is that we are honest with ourselves about why we want to do something; if we hope to convince others to change, there needs to be a REAL reason why we think that this is an improvement, and not just because we'd like to play with the tech, or it would make our CVs better. Finally, the best way to convince people is to fix a problem that they have. This is a good topic, maybe I should make a video on this?
    8
  210. 8
  211. 8
  212. 8
  213. 8
  214. 8
  215. 8
  216. 8
  217. 8
  218. 8
  219. 8
  220. 8
  221. 8
  222. 8
  223. 8
  224. 8
  225. 8
  226. 8
  227. 8
  228. 8
  229. 8
  230. 8
  231. 8
  232. 8
  233. 8
  234. 8
  235. 8
  236. 8
  237. 8
  238. It is confusing. I use the words in the following way: Deploy - the technical act of copying some new software to a host environment and getting it up-and-running, and so ready for use. In context with 'Continuous', as in 'Continuous Deployment', I use it to mean 'automating the decision to release' - if your pipeline passes, you push the change to production with no further actions. Release - making a new feature available for a user to use. (Note: we can deploy changes that aren't yet ready for use by a user; when we make them available to a user, with or without a new deployment, we release them - I plan to do a video on strategies for this.) Delivery - in context with 'Continuous' I mean this in a broader sense. For me, 'Continuous Delivery' makes most sense as used in the context of the Agile Manifesto - 'Our highest priority is the early and continuous delivery of valuable software to our users'. So it is about a flow-based (continuous) approach to delivering value. That means that the practices needed to achieve CD are the practices needed to maintain that flow of ideas - so it touches on all of SW dev. I know that this is an unusually broad interpretation, but it is the one that makes the most sense to me, and the one that I find helps me to understand what to do if I am stuck trying to help a team to deliver. There is, as far as I know, one place where my language is a bit inconsistent: I tend to talk about working towards "Repeatable, Reliable Releases"; if I were willing to drop the alliteration and speak more accurately, that should be "Repeatable, Reliable Deployments". I hope that helps a bit?
    8
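The deploy-vs-release separation described above is often implemented with feature flags: the code is deployed but a feature stays unreleased until a flag flips, no new deployment required. A minimal sketch, with the flag store and all function names my own illustration:

```python
# Deployed != released: the new checkout code ships to production below,
# but users keep seeing the legacy path until the flag is flipped.

FLAGS = {"new_checkout": False}  # deployed, not yet released

def legacy_checkout(cart):
    return sum(cart)

def new_checkout(cart):
    # The new feature, dark-launched: present in production, invisible to users.
    return {"total": sum(cart), "currency": "USD"}

def checkout(cart):
    if FLAGS.get("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)

# Releasing becomes a runtime decision, decoupled from deployment:
# FLAGS["new_checkout"] = True
```

In real systems the flag store would be external configuration rather than a module-level dict, so the release decision can be made without touching the running code.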
  239. 8
  240. 8
  241. 8
  242. 7
  243. 7
  244. 7
  245. 7
  246. 7
  247. 7
  248. 7
  249. 7
  250. 7
  251. 7
  252. 7
  253. 7
  254. 7
  255. 7
  256. 7
  257. I find it helpful to think about this from two angles: 1) What does it take for you to feel ready to release? 2) How long does it take to get that confidence? I am then going to optimise for those two things, speed & confidence. I advise people to divide their deployment pipeline into, effectively, two stages: a fast-feedback stage and a higher-confidence stage. You want developer-focussed feedback in the fast stage; I generally advise people to aim for tests that can give about 80% confidence that, if they all pass, every other kind of test will be fine and also pass. The aim is to achieve that 80% confidence in the shortest time possible - I advise in under 5 minutes. That immediately rules out some kinds of tests; most of the tests that can run really fast but still give high confidence are going to be unit tests - best created via TDD. Then you need to do whatever else it takes to improve your confidence to the point where you are comfortable to release - acceptance tests, perf tests, security tests, whatever. These will take longer to run, so we run them after the commit-stage (fast-cycle) tests. The last nuance, that this video describes, is to use the Acceptance Tests (BDD scenarios) to capture the behavioural intent of the change, so that you can use that as an "Executable Specification" to guide your lower-level testing, and so the development of your features. There are several other videos on the channel that explore these ideas in more depth, looking at some of the different kinds of testing.
    7
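The two-stage split described above can be sketched as a small scheduler: a fast commit stage with a strict time budget, gating a slower higher-confidence stage. The registries, decorators, and budgets here are my own illustration of the structure, not real pipeline tooling:

```python
import time

# Two-stage pipeline sketch: fast commit-stage tests run first, under a
# 5-minute budget; slower acceptance-stage tests run only if they pass.

COMMIT_STAGE, ACCEPTANCE_STAGE = [], []

def commit_test(fn):
    COMMIT_STAGE.append(fn)
    return fn

def acceptance_test(fn):
    ACCEPTANCE_STAGE.append(fn)
    return fn

def run(stage, budget_seconds):
    start = time.monotonic()
    for test in stage:
        test()  # any failed assertion stops the line
    # Fast feedback only works if the stage stays inside its budget.
    return (time.monotonic() - start) <= budget_seconds

@commit_test
def test_price_calculation():
    assert 2 * 3 == 6          # stands in for a fast, TDD-style unit test

@acceptance_test
def test_whole_user_journey():
    assert True                # stands in for a slower BDD acceptance scenario

fast_ok = run(COMMIT_STAGE, budget_seconds=300)    # 5 minutes
if fast_ok:
    run(ACCEPTANCE_STAGE, budget_seconds=3600)
```

Real pipelines achieve the same gating with their CI server's stage ordering; the point is the shape: cheap ~80%-confidence feedback first, expensive confidence later.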
  258. 7
  259. 7
  260. 7
  261. 7
  262. 7
  263. 7
  264. 7
  265. 7
  266. 7
  267. 7
  268. 7
  269. I think that one of the best examples that we have of how to architect for hardware is an OS. We don't often think of it that way, but that's what an OS does: it provides an insulation layer of code between our apps, which do useful things, and the hardware that they run on. I recommend that you architect for bespoke hardware similarly. Establish well-defined interfaces at the boundaries and test apps to those interfaces. If I am writing a Windows app or a Mac app, I don't worry about testing it with every last detail of every printer that may be connected. OS designers design an API that abstracts printing - we call them print device drivers - and then we write to those abstractions. The people that write the printer drivers don't test their driver with every app that uses it. They will have an abstract test suite that validates that the driver works with their printer. Their tests will be made-up cases that exercise the bits that the driver writers are worried about. My recommendation for hardware-based systems is to work hard to define, and maintain, a clean API at the point where the SW talks to the HW. Write layers of code, firmware and drivers perhaps, that insulate apps from the HW; test the apps against fake versions of that API, under test control. Test the driver layer in the abstract, in terms of "does the driver work" rather than "does an app work". It's not perfect, you may not trust it enough, but this is a MUCH more scalable approach to testing, and a version of this is how, for example, the vast majority of testing in a Tesla is done.
    7
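The insulation-layer idea above reduces to a small pattern in code: the app talks only to an abstract device API, and tests substitute a fake that records what happened. The `Printer` interface and `FakePrinter` names are my own illustration:

```python
# Hardware insulation sketch: app code depends on an abstract boundary API,
# never on the hardware itself; tests swap in a fake under test control.

class Printer:
    """The clean API at the SW/HW boundary (what a real driver implements)."""
    def print_page(self, text):
        raise NotImplementedError

class FakePrinter(Printer):
    """Test double standing in for real hardware; records pages printed."""
    def __init__(self):
        self.pages = []
    def print_page(self, text):
        self.pages.append(text)

def print_report(printer, lines):
    # Application code: tested against the interface, not a physical device.
    for line in lines:
        printer.print_page(line.upper())

fake = FakePrinter()
print_report(fake, ["hello", "world"])
assert fake.pages == ["HELLO", "WORLD"]
```

The real driver is then validated separately against real hardware with its own abstract test suite, which is what makes the approach scale.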
  270. 7
  271. 7
  272. 7
  273. 7
  274. 7
  275. 7
  276. 7
  277. 7
  278. 7
  279. 7
  280.  @qj0n  I agree, I think that there is an issue of honour, or morality, here. But I also think that there is, maybe, a deeper form of self-interest at play. Sure, you can 'cheat' and pick tech that makes it easier to get the next job that you want, but I am not convinced that that is a good way to get to work for the better employers. I think that the better employers are looking for something beyond a tick-list of technologies. When I was doing a lot of interviewing, I would see it as a mark against someone when their CV was basically a list of tech. They would have to work harder to convince me that they weren't missing the whole point of SW dev, which is to solve problems, not to wield tech. A good dev can learn the tech in days to be useful and weeks to be good, so the tech is never my primary goal in recruiting; it is much more about how they work through problems to solve them. I know that my interview style is unusual, but I still think it is better 😉🤣 I think that you can gain some limited advantage by 'cheating' the system, but I think that you build much more advantage, and reputation, by having a laser focus on solving problems well, and doing that, to the best of your ability, in the interest of the companies that employ you. I am not 100% sure that I am right here, I can only speak from personal experience, but people liked working with me because, over time, they saw that I was working in their interest, sometimes even when it didn't align perfectly with mine. I have taken jobs using "less cool" technologies in the past, because that was the right choice for them, and there were other reasons for me wanting the job.
    7
  281. 7
  282. 7
  283. 7
  284. 7
  285. 7
  286. 7
  287. 7
  288. 7
  289. 7
  290. 7
  291. Thanks 🙂 Yes, I have been building service-oriented systems for a few decades now; it is very much my preferred approach in terms of architectural style, though naturally I try to fit the solution to the requirements before picking an architecture - there is no "one-size-fits-all" option. When you start out on any complex system, you don't know where you will end; you will grow your system as your understanding grows. At least, that is how I work, and I think it is the best way to create complex systems. In the early days you will be exploring more and making more mistakes; it takes a while for your design, at the level of services, to consolidate. You will find that some of the service interfaces, even if you got the service boundaries in about the right place to start with, change a lot as you refine the responsibilities of the service and evolve the cleanest interfaces. During this period there is no benefit to a microservice approach IMO. A much better strategy is to bung everything in one repo and build and test it all together. The HUGE advantage of this approach is that, because you can "build and test it all together", there is NO DEPENDENCY MANAGEMENT of any kind. I can change the interface to my service, and update all of its consumers, in the same commit that I make that change! The main downside of this approach is that you have to be efficient enough in your building and testing to get answers back on CD timescales, so under 1 hour for everything. It also means that the pieces aren't really "independently deployable". This approach doesn't stop you having separate, reasonably independent, small teams though. This is how we built our financial exchange at LMAX, and it is how Google and Facebook organise too. It is surprisingly scalable! As I describe in the video, microservices is an organisational scaling strategy, nothing more. It limits the options for optimisation, because each service is discrete.
It means that you have extra work to do to facilitate the "independent deployability" and so on. It also demands a higher level of design sophistication to keep the services separate and "independently deployable". My advice, and the advice of Sam Newman, who wrote the most popular book on microservices, is to begin with a distributed monolith and, only once the interfaces have stabilized, move to microservices.
    7
  292. 7
  293. 7
  294. 7
  295. 7
  296. 7
  297. 7
  298. 7
  299. 7
  300. 7
  301. 7
  302. 7
  303. 6
  304. 6
  305. 6
  306. 6
  307. 6
  308. 6
  309. 6
  310. 6
  311. I wouldn't have a "Backend" story; that is an artificial split driven by technical design choices, and so it exposes those choices at the level of stories, meaning that you have allowed implementation detail to leak out into the story - a bad idea - and increased the coupling between the story and the solution - another bad idea. I would instead find a user story that matters from the perspective of a user, and forces me to implement something not hard-coded. In the bookstore example, we could imagine a requirement along the lines of "I'd like to see new books when they are added to the list", or perhaps "I'd like to see what books are left when a book is removed from the list". None of these have to be perfect. The idea here is NOT to do programming by remote control; the idea is to give us the freedom to design good, sensible solutions without 1) being told what those solutions must be, and 2) the story necessarily forcing us to make any specific technical change, other than WHATEVER is needed to achieve the goal that the user wants. Stories are tools to HELP us develop software, so use the stories, which should ONLY express user need, to guide your choices in terms of the design of the solution; but those solution choices are yours, and it is ok for you to decide when to sensibly make them. So my example wasn't meant to demonstrate me splitting F.E. from B.E.; in fact, in my example, the story had both F.E. and B.E., represented by the service and the UI in my diagrams. The service with the hard-coded list of books WAS MY B.E.! What I want next is a story that makes me need to do better than simply hard-coding a response; if I can't think of one, then maybe I should hard-code the response, because that is simpler!
    6
  312. 6
  313. 6
  314. 6
  315. 6
  316. 6
  317. 6
  318. 6
  319. 6
  320. 6
  321. 6
  322. 6
  323. 6
  324. 6
  325. 6
  326. 6
  327. 6
  328. 6
  329. 6
  330. 6
  331. 6
  332. 6
  333. 6
  334. 6
  335. 6
  336. 6
  337. 6
  338. 6
  339. 6
  340. 6
  341. 6
  342. 6
  343. 6
  344. 6
  345. 6
  346. 6
  347. 6
  348. 6
  349. 6
  350. 6
  351. 6
  352. 6
  353. 6
  354. 6
  355. 6
  356. 6
  357. 6
  358. 6
  359. 6
  360. 6
  361. 6
  362. 6
  363. 6
  364. 6
  365. 6
  366. 6
  367. 6
  368. 6
  369.  @DefinitelyNotAMachineCultist  Specifically on "picking up new tech faster", my view is that the best devs are good at this because their knowledge is grounded in a few foundational concepts. Computers aren't magic, and knowing loads of APIs doesn't make you the best dev - ultimately, you can look stuff up! The skill is having a framework for problem solving. I think that we should focus on optimising for learning and managing complexity. When I start in with a new tech, I will begin by finding out ways to test it, and then play for a bit to see how it works. I think it is also important to remember that, ultimately, computers process data with machine-level instructions; everything else is just how you organise those things. I find that that helps me to see what is going on. In part it is experience, sure, but it means that you can detect the bullshit sooner. My wife's father used to teach physics; as he got older he couldn't remember all the different formulas, but he could solve almost any practical problem because he knew V=IR, F=MA, and algebra and trig. Really having a good feel for ideas like modularity & cohesion, treating them as the most important things in design, and using techniques like "separation of concerns" and designing for "testability" to enhance the modularity and cohesion in your designs, will take you a VERY long way - apply this to any new tech that you are trying. If you are interested in this kind of stuff, keep a look out for my next book; I am currently working on it (I am supposed to be doing some writing today, but I am talking to you :)) and it will hopefully be published in the second half of this year. It is on exactly these topics.
    6
  370. 6
  371.  @PetiKoch  Ah, I see what you mean. Sure, maybe; it depends on the application. The way that I build my services, that is more of a deployment-time decision than an architectural one, which is why I don't think of it that way. The danger of starting with your monolith as a "single-process monolith" is that the other description of that is just a bunch of code! Unless we are writing something that we KNOW will be throw-away, I think that the guiding principle for good design, even for small systems, is to manage complexity. So I want my design to be modular and cohesive, with good separation of concerns, and as loosely-coupled and well abstracted as seems to make sense at the point that I write it. So I like the idea of "services" as a modular unit. Now, what I mean by a "distributed service architecture" is that I don't care if the services are running on the same machine or on machines on different parts of the planet. That means that the comms mechanism, and my design, needs to allow for the case where I want to make the services non-local. My favourite way to do that is to make the interfaces to my services async. Now it is a deploy-time decision whether I optimise for simple, local, fast comms or distributed, non-local comms. I have separated that comms concern. But this only works if I assume that the services are non-local. If I assume they are local, I may become addicted to the local performance, and then when I try to distribute them later, when my system needs to scale, my architecture is no longer fit for purpose. You could argue YAGNI; the trick here is to do this in a way that makes the overhead of this "thinking ahead" low enough that it's not really a problem. That is a matter of experience and design-taste, I suppose, but we got that right when we built our exchange and I have used that approach a few times since.
There is some more stuff on the architectural style that I am hinting at here: https://martinfowler.com/articles/lmax.html and, relatedly, I was involved in creating something called the "Reactive Manifesto" to describe some of the properties of this async architectural approach: https://www.reactivemanifesto.org/
    6
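The async-interface idea above can be sketched in a few lines: callers are written against an async call interface, so whether the service is in-process or remote becomes a wiring decision. The names (`PricingService`, `LocalTransport`) and the whole shape are my own illustration of the principle, not LMAX code:

```python
import asyncio

# Async service-interface sketch: the caller awaits a transport, never the
# service directly, so local vs distributed becomes a deploy-time choice.

class PricingService:
    async def price(self, sku):
        return {"sku": sku, "price": 42}

class LocalTransport:
    """In-process 'comms' offering the same interface a network transport would."""
    def __init__(self, service):
        self.service = service
    async def call(self, method, *args):
        # A distributed deployment would serialise this over the network;
        # locally we simply await the service method.
        return await getattr(self.service, method)(*args)

async def main():
    # Caller code depends only on the async call interface; swapping
    # LocalTransport for a remote transport later leaves it unchanged.
    transport = LocalTransport(PricingService())
    return await transport.call("price", "book-123")

quote = asyncio.run(main())
```

Because the caller never assumes a synchronous, local reply, it cannot become "addicted to local performance", which is the trap described in the comment.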
  372. 6
  373. 6
  374. 6
  375. 6
  376. They don't have to work like we do to be a threat, and they don't need the same kind of plasticity. This is evolution, and evolution only needs a replication mechanism, selection pressure, and variance. At that point it will evolve things. Machine AI will work millions of times faster than we do, so even if it isn't as smart, it can have evaluated millions more choices than we have in the same time, and will have access to "better" choices. That's how chess AIs traditionally beat humans; now it is not quite so clear why - they are trained on millions of games, and infer for themselves how to win. I don't see how anyone can not see this as an existential threat. As we begin connecting AI to the real world, so that they can act as well as respond, they gain agency to change things. If they are smarter than us, then we don't know what they will choose to change, and whether or not it will be in our interests. They don't need to be conscious to do this; if they are making changes that aren't moderated by human decision making, and they are evolving, then they are, by definition really, uncontrolled. I don't care whether my children and grandchildren get wiped out by AI by accident, or because they are evil; both are the worst outcome for me. We can't control evolution - look at the on-going COVID pandemic, which, as I say in the video, is a lot simpler as an evolutionary platform than AI. I think that the genie is already out of the bottle; we will get world-affecting AI, and we are currently wandering into this future and not paying attention. You don't seem to be paying attention to this, and you are working in the field. To quote Musk, but changing the context (he said this when talking about climate change a few years ago): if there is a 1% threat of extinction of our species, that is too big a chance to take.
I am pretty sure that the chance is more than 1%; it may not be lots more, and it is not a certainty, but 1% is too big a risk to gamble everything that we, as humans, value. I am not confident that we can do much about this, but I do think that we should try.
    6
  377. 6
  378. 6
  379. 6
  380. 6
  381. 6
  382. 6
  383. 6
  384. 6
  385. 6
  386. 6
  387. 6
  388. 6
  389. 6
  390. 5
  391. 5
  392. 5
  393. 5
  394. 5
  395. 5
  396. 5
  397. 5
  398. 5
  399. 5
  400. 5
  401. 5
  402. 5
  403. 5
  404. 5
  405. 5
  406. 5
  407. 5
  408. 5
  409. 5
  410. 5
  411. 5
  412. 5
  413. 5
  414. 5
  415. I don't claim that the advice "is based on science"; I say that the engineering discipline that I describe is based on scientific-style rationalism, by which I mean the practice of science, not the findings. The practice of science is based on some key ideas. Always assume that you can be, and almost certainly are, wrong. Work to find out how, and where, you are wrong, and try something new to fix it. Make progress in small steps, and do your best to falsify, or validate, each step (falsification is usually a more profound test). Make progress as a series of experiments. Being experimental means having a theory about what you are doing and why you are doing it, figuring out how you will determine if your theory is sound before you begin the experiment, and controlling the variables enough so that you will understand the results that you get back from the experiment. There is a fair bit more, but that is what I mean by being "scientific". The only study that I am aware of that I believe has a decently strong claim to being scientifically defensible is the DORA study, described in the "Accelerate" book by Nicole Forsgren et al. The other vitally important aspect of a more scientific approach is to use what David Deutsch calls "Good Explanations". According to Deutsch, a "Good Explanation" is... 1. *Hard to Vary*: If you can change parts of the explanation while still making it work, the explanation is not considered robust or deep. A good explanation has little flexibility in its structure - any change would render it inadequate or false. 2. *Not Merely Predictive*: A good explanation goes beyond mere prediction. Many theories or models can predict outcomes (e.g., using formulas or data), but a good explanation delves into why something is happening, in a way that is resistant to arbitrary alteration. 3. *Truth-Seeking*: It aims to accurately represent the reality of the phenomenon it explains, rather than just being a convenient or pragmatic model. 4. *Problem-Solving*: A good explanation not only fits existing data but also solves the problem it was created to address. It reduces the mysteries, clarifying why things happen the way they do.
    5
  416. 5
  417. 5
  418. 5
  419. 5
  420. 5
  421. 5
  422. 5
  423. 5
  424. 5
  425. 5
  426. 5
  427. 5
  428. 5
  429. 5
  430. 5
  431. 5
  432. 5
  433. 5
  434. 5
  435. 5
  436. 1. The data on pair programming says that 2 people complete the same task as one person in 60% of the time, so not 2 for 1, but not faster. But the quality produced by the pairs is substantially higher. The overall impact is that pairs are at least as efficient, but probably more efficient, than a single person. The problem with being more definite than that is that teams that do pairing usually do a lot of other good stuff too, so you can’t tell the effect of pairing vs other improvements. 2. The commit history still tells the truth, but it is a truth more like a transaction log in an event stream, rather than some kind of time-based snapshot. Yes, include a reference to the reason (could be a Jira ticket) in every commit. You can take this further: adopt some conventions for commit messages, and you can programmatically recreate clear descriptions for releases. I do a lot of work in regulated industries; we can often auto-generate release notes. 3. Well, part of CI and CD is to work so that the codebase is always good, all tests pass (CI), and so that your software is always in a releasable state (CD). So no, you can’t knowingly commit code that breaks things! If you break something you “stop the line” and can’t release till you fix or revert the problem; that is what CI (or CD) means. Teams that work this way test nearly everything with automated tests. Sounds slow, but it is not, because you spend time writing tests instead of diagnosing and fixing bugs. Teams that work this way spend 44% more time creating new features than teams that don’t. I have videos that cover all of this stuff on my channel.
    5
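The commit-message convention idea in point 2 can be sketched in code. A minimal Python sketch, assuming a hypothetical "type(scope): summary [TICKET-ID]" convention; the regex, ticket format, and grouping rules are illustrative, not any specific team's standard:

```python
import re
from collections import defaultdict

# Hypothetical convention: "feat(payments): add 3DS support [JIRA-123]"
COMMIT_RE = re.compile(
    r"^(?P<type>\w+)\((?P<scope>[\w-]+)\): (?P<summary>.+?)\s*\[(?P<ticket>[A-Z]+-\d+)\]$"
)

def release_notes(commit_messages):
    """Group conventional commit messages into simple release notes."""
    sections = defaultdict(list)
    for msg in commit_messages:
        m = COMMIT_RE.match(msg)
        if m:  # ignore commits that don't follow the convention
            sections[m["type"]].append(f"- {m['summary']} ({m['ticket']})")
    lines = []
    for kind in sorted(sections):
        lines.append(f"{kind}:")
        lines.extend(sections[kind])
    return "\n".join(lines)

notes = release_notes([
    "feat(payments): add 3DS support [JIRA-123]",
    "fix(login): reject expired tokens [JIRA-124]",
])
print(notes)
```

With a convention like this enforced on every commit, the release notes fall out of the commit log automatically, and each line traces back to a ticket.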
  437. 5
  438. 5
  439. 5
  440. 5
  441. 5
  442. 5
  443. 5
  444. 5
  445. 5
  446. 5
  447. 5
  448. 5
  449. 5
  450. 5
  451. 5
  452. I don't think that there is an answer, if we constrain the problem to only fixing a price. Let's be clear though: for small, simple, low-cost things that are very similar to work that we have done before, we have some basis to make a guess. It will be wrong, but commercially it is easier to say "I will make you a wordpress based website for £500" than "I will build you a healthcare system for £50 million". The first one MAY be close enough, and you will do enough of them that sometimes you will do it with £125 worth of effort, and sometimes £2000, and it will work out as long as it is more often less than more - incidentally those are the error bars for estimation at the start of a software project, 1/4x to 4x! But we can see from real-world projects that estimates for bigger projects are ALWAYS WRONG, but because the numbers are so big and scary, people want more precision, even though bigger projects are more uncertain, because they are less similar to one another and there are always huge unknowns at the start. My view is that anything we do to up the precision is a mistake, because the error-bars are so huge that precision isn't what we need. So more subjective, less precise is better, and best of all, to my mind, is the realisation that organising this through ideas like incremental/venture-capital style funding models for big projects is by far the more sane response to the reality of what SW dev really is. The trouble is that customers and businesses are not always, or even usually, rational. So crossing your fingers and guessing is all there is.
    5
  453. 5
  454. 5
  455. 5
  456. 5
  457. 5
  458. 5
  459. 5
  460. ​ @jimhumelsine9187  I guess the point that I am trying to make is that to me the requirements don't "belong" to someone else, they are owned by the development team. Sure, other people can suggest changes, but if the developers don't understand the problem well enough to build software that is in some sense functionally coherent that is still a development problem to me. The developers are the ones that are closest to the solution, whatever that may be. The problem specified may be the wrong problem to fix, and I think that is something that can be sensibly outside of the development team, but whatever software we create needs to make sense within our understanding of the problem and its solution. So that means that, to me, a "bug" is something that is within our understanding of the system but where it doesn't work properly in some way. Missing something in the requirements is a gap in our understanding of the system, but not really a bug, by that definition. Sure, pragmatically what I have described is not how lots of teams work, but I still think it makes sense as a model. So I aim to ensure that the system works, by which I mean that it fulfils all of the behavioural needs that we have identified, and got around to implementing so far. We will inevitably still miss things, but if they cause a failure on the delivery of the behaviour of the system that we have so far they are bugs, and if they are gaps in our understanding of the requirements I'd see those as "yet to be delivered features". I think that makes sense?
    5
  461. 5
  462. 5
  463. 5
  464. 5
  465. 5
  466.  @SirBenJamin_  I think that this highlights part of the mindset switch that helps with the adoption of TDD. TDD is not really about "knowing the right inputs"; it is much more about "understanding what you want the code to achieve". That is what you write the test to test. This may be a subtle but important difference. The problem with teaching TDD is that you have to start with simple examples, but what we are teaching here is not the solution, but the technique. I often teach TDD later in the course using an exercise in adding Fractions. The most important part of this exercise is which examples I choose as tests; it is usually something like 1 + 2 = 3, 1/3 + 1/3 = 2/3, 1/3 + 1/4 = 7/12 and so on. Think about what this means for the progression of your design. Imagine your solution changing to meet the new need demanded by each subsequent test; you will be solving different parts of the problem. We start with the problem of how we want to represent a Fraction, even a simple fraction like 1/1. Next we add fractions where the denominator stays the same, then fractions where we need to do some reduction. So I'd say that if you don't understand which test to write yet, it means that you don't really understand the problem you need to solve either. One of the big benefits of TDD, to my mind, is that it forces us to do this more thorough exploration of the problem so that we can incrementally, test by test (small change in behaviour at a time), evolve our understanding AND our implementation.
    5
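The progression of fraction tests described above can be sketched as code. A minimal Python illustration, assuming a tiny hand-rolled Fraction class (in real code Python's built-in `fractions` module already does this); each assertion is one of the example tests from the comment, and each forces the next design step:

```python
import math

class Fraction:
    """Minimal fraction, just enough to pass the example tests."""
    def __init__(self, numerator, denominator=1):
        g = math.gcd(numerator, denominator)  # reduce, e.g. 6/9 -> 2/3
        self.numerator = numerator // g
        self.denominator = denominator // g

    def add(self, other):
        # put both fractions over a common denominator; the
        # constructor then reduces the result
        return Fraction(
            self.numerator * other.denominator + other.numerator * self.denominator,
            self.denominator * other.denominator,
        )

    def __eq__(self, other):
        return (self.numerator, self.denominator) == (other.numerator, other.denominator)

# The test progression, one design step at a time:
assert Fraction(1).add(Fraction(2)) == Fraction(3)            # whole numbers: how to represent a Fraction
assert Fraction(1, 3).add(Fraction(1, 3)) == Fraction(2, 3)   # same denominator, needs reduction
assert Fraction(1, 3).add(Fraction(1, 4)) == Fraction(7, 12)  # needs a common denominator
```

The point is not the arithmetic but the ordering: each test demands one small new capability, so the design grows incrementally rather than being guessed up front.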
  467. 5
  468. 5
  469. 5
  470. 5
  471. 5
  472. 5
  473. 5
  474. 5
  475. 5
  476. 5
  477. 5
  478. 5
  479. 5
  480. 5
  481. 5
  482. 5
  483. 5
  484. 5
  485. 5
  486. 5
  487. 5
  488. 5
  489. 5
  490. 5
  491. 5
  492. 5
  493. 5
  494. 5
  495. 5
  496. 5
  497. 5
  498. 5
  499. 5
  500. 5
  501. 5
  502. 5
  503. 5
  504. 5
  505. 5
  506. 5
  507. 5
  508. 5
  509. 5
  510. 5
  511. 5
  512. 5
  513. 5
  514. 5
  515. 5
  516. 5
  517. 5
  518. 5
  519. 5
  520. 5
  521. 5
  522. 5
  523. 5
  524. 5
  525. 5
  526. 5
  527. 5
  528. 5
  529. 5
  530. 5
  531. 5
  532. 5
  533. 5
  534. 5
  535. 5
  536. 5
  537. 4
  538. 4
  539. 4
  540. 4
  541. 4
  542. 4
  543. 4
  544. 4
  545. 4
  546. 4
  547. 4
  548. 4
  549. 4
  550. 4
  551. 4
  552. 4
  553. 4
  554. 4
  555. 4
  556. 4
  557. 4
  558. 4
  559. 4
  560. 4
  561. 4
  562. 4
  563. 4
  564. 4
  565. 4
  566. 4
  567. 4
  568. 4
  569. 4
  570. 4
  571. 4
  572. 4
  573. 4
  574. 4
  575. 4
  576. The problem is always the coupling. The problem with large projects is that the complexity explodes. If I write SW on my own, all changes are down to me, and while I may forget or misunderstand something, only I can break my code. If you and I work together, now we can break each other’s code. To defend against that we can talk a lot and understand what each of us is working on. That doesn’t scale up very well really. The limit of being able to work in a team like this, and know enough of what everyone else is doing, is probably only 8 or so people. After that there is a measurable reduction in quality. If you grow much beyond that and everyone is working without any compartmentation of the system and teams, then they will almost certainly create rubbish. So then the next step in growth is to divide people, and their work, into compartments so that they can work more independently of one another. This is where things start to become more complex. The quality of the compartmentation matters a lot! If you do this really well, it scales up to an overall team size (divided into many smaller teams) of probably low hundreds, if you want to keep their work consistent and coordinated. After that you pretty much MUST de-couple to scale up further. These are all “rules of thumb”, approximately right rather than hard and fast laws. You can improve the scalability of code and teams with great design, but the way to optimise for max scalability is to go back to independent small teams.
    4
  577. 4
  578. 4
  579. 4
  580. 4
  581. 4
  582. 4
  583. 4
  584. 4
  585. 4
  586. 4
  587. 4
  588. 4
  589. 4
  590. 4
  591. 4
  592. 4
  593. 4
  594. 4
  595. 4
  596. 4
  597. 4
  598. 4
  599. 4
  600. 4
  601. 4
  602. 4
  603. 4
  604. 4
  605. 4
  606. 4
  607. 4
  608. 4
  609. 4
  610. 4
  611. 4
  612. 4
  613. 4
  614. 4
  615. 4
  616. 4
  617. 4
  618. 4
  619. 4
  620. 4
  621. 4
  622. 4
  623. 4
  624. 4
  625. 4
  626. 4
  627. 4
  628. 4
  629. 4
  630. 4
  631. 4
  632. 4
  633. 4
  634. 4
  635. 4
  636. 4
  637. I suppose that it depends on the criteria for "best".  I was very proud of writing the first "Sprite routine" for the ZX Spectrum, even though other people released theirs in commercial games before I did. I got interested in computer graphics and built my own 2 and 3D editing and animation systems. I ended up with a complete 3D modelling suite of tools and rendering ray-traced images. All built on my own graphics primitives written in assembler. I did some pretty cool stuff working in something called "NewI" which stood for "New World Infrastructure". It was a deeply OO programming platform for creating systems from what we called "Cooperative Business Objects" but that these days we would think of as Microservices. You could build a service independently of anything else and it could do useful work with other services that you didn't know about when you wrote it. I have never seen anything else that did quite such a good job of that. I built some flight-planning software that I was very proud of, as a personal hobby project. I hacked the Garmin GPS protocol so that I could get the moving-map part working, and came up with a scheme for geographic hashing functions, meaning that I could use a geographic coordinate to search a hash map of interesting things and pop them up as you moved your mouse cursor over them, which was pretty cool at the time. I led a team that built the point of sale system for Dixons in the UK. I still get pleasure, nearly 20 years later, going into a Computer World store and seeing our software still in use. My personal high point though is building the LMAX exchange. We built one of the world's highest performance financial exchanges. From a trade arriving at the edge of our network to the fulfilment of that trade leaving the edge of our network took 80 microseconds. That was world-class at the time. We learnt such a huge amount. It was also my best example of a completely green-field Continuous Delivery project. 
I have had a lot of fun!
    4
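The geographic-hashing idea mentioned above can be illustrated. This is not the original scheme, just a minimal Python sketch of the general concept, assuming a fixed 0.1-degree grid: hash each coordinate to a grid cell, then answer "what is near the cursor?" by checking that cell and its 8 neighbours:

```python
from collections import defaultdict

CELL = 0.1  # grid size in degrees; an assumption, not the original scheme

def cell_key(lat, lon):
    """Hash a coordinate to its grid cell (floor division handles negatives)."""
    return (int(lat // CELL), int(lon // CELL))

class PointsOfInterest:
    def __init__(self):
        self.cells = defaultdict(list)  # cell key -> points in that cell

    def add(self, lat, lon, name):
        self.cells[cell_key(lat, lon)].append((lat, lon, name))

    def near(self, lat, lon):
        """Everything in the coordinate's cell and the 8 neighbouring cells."""
        cx, cy = cell_key(lat, lon)
        found = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                found.extend(self.cells.get((cx + dx, cy + dy), []))
        return found

poi = PointsOfInterest()
poi.add(51.47, -0.45, "Heathrow")
poi.add(53.35, -2.27, "Manchester")
nearby = [name for _, _, name in poi.near(51.48, -0.44)]
print(nearby)
```

The lookup is O(1) in the number of stored points regardless of map size, which is what makes mouse-over queries on a moving map cheap.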
  638. 4
  639. 4
  640. 4
  641. 4
  642. 4
  643. 4
  644. 4
  645. Not really, most, if not all, of them are backed by the research that we describe from DORA. It is also not dogmatic if we are willing to change to something better, as we explicitly discuss in this video. "Dogma: Dogma in the broad sense is any belief held unquestioningly and with undefended certainty" By definition, we aren't holding views "unquestioningly" if we question them and explore and consider the alternatives. If we refute the alternatives, based on evidence, that isn't dogma, that is science and engineering! I can also give you a rational reason for every practice, why it matters, and why it works better than the alternative, that I promote, so it isn't "undefended" either. Of course, I may be wrong, and you may disagree, but neither of those things say that I, or Jez, are being "dogmatic". There is certainly no "100% proof" here if that is what you mean, of course it is possible to build software without Continuous Delivery, but the data from the most scientifically defensible research into SW dev practice says that if you don't practice CD you have a significantly lower chance of success, which is why many, maybe most, of the most successful SW companies in the world work this way. Statistically, if you don't do these things, the chances are that you produce worse software slower! That isn't subjective or dogma, that is what the data says, if you want to challenge that, here is a video that tells you what you need to do to refute these ideas: https://youtu.be/pAX8GAsRaYk
    4
  646. 4
  647. 4
  648. 4
  649. 4
  650. 4
  651. 4
  652. 4
  653. 4
  654. 4
  655. 4
  656. 4
  657. 4
  658. 4
  659. 4
  660. 4
  661. 4
  662. 4
  663. 4
  664. 4
  665. 4
  666. 4
  667. 4
  668. 4
  669. 4
  670. 4
  671. 4
  672. 4
  673. 4
  674. 4
  675. 4
  676. 4
  677. 4
  678. 4
  679. 4
  680. 4
  681. 4
  682. 4
  683. 4
  684. 4
  685. 4
  686. Thanks, I am pleased that you like it. If you have watched many of my videos, you probably already know that I like to build mental models of how stuff works; the theory helps me with this, but you also have to ground it in real-world experience, so having examples like this really helps. It was great that Adaptive let me critique and publish their stuff. On the naming, I think that there are three aspects to naming tests. First, you want people to know what the test does. I like simple, descriptive names that make sense in the scope of the problem domain: "PayByCreditCard", "LoginWithBadPasswordRejected" and so on. Next, you want to be able to deal with classes of tests as a group. I want to build automation that allows me to easily know which is an "AcceptanceTest" and which is a unit "Test", because my automation needs to do different things with them. I usually do this in two parts: Structural - separate different classes of test into different content-roots, so everything below the "acceptance" dir is related to acceptance testing. Convention - name tests so that it is obvious what kind of test it is. I usually adopt the convention of TDD tests ending with the word "Test" and BDD style acceptance tests ending "AcceptanceTest", so that I can write code to parse tests and differentiate between the different types, e.g. "PayByCreditCardAcceptanceTest". Finally, we may want traceability for audit or compliance, so some unique ID can be useful. I have played with different strategies for that, sometimes using classifiers "acc.pay.004", other times just a number that we can use as an ID.
    4
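The structural and naming conventions described above can be sketched as code. A minimal Python sketch, assuming the "Test"/"AcceptanceTest" suffixes and the "acceptance" content root from the comment; the example paths are illustrative:

```python
from pathlib import Path

def classify(test_path):
    """Classify a test by the comment's two conventions:
    structural (an 'acceptance' content root) and
    naming (the 'Test' / 'AcceptanceTest' suffixes)."""
    path = Path(test_path)
    name = path.stem
    if name.endswith("AcceptanceTest") or "acceptance" in path.parts:
        return "acceptance"
    if name.endswith("Test"):
        return "unit"
    return "unknown"

# Naming convention picks this one out...
assert classify("src/test/PayByCreditCardAcceptanceTest.java") == "acceptance"
# ...and the content root catches tests below the "acceptance" dir.
assert classify("acceptance/LoginWithBadPasswordRejected.java") == "acceptance"
assert classify("src/test/ShoppingCartTest.java") == "unit"
```

Build automation can use a classifier like this to route each group of tests to a different pipeline stage.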
  687. 4
  688. 4
  689. 4
  690. 4
  691. 4
  692. 4
  693. 4
  694. 4
  695. 4
  696. 4
  697. 4
  698. 4
  699. 4
  700. 4
  701. 4
  702. 4
  703. 4
  704. 4
  705. 4
  706. 4
  707. 4
  708. 4
  709. 4
  710. 4
  711. 4
  712. 4
  713. 4
  714. Interesting question! I know what you mean about Lean & Kanban making you feel like you are on a bit of a treadmill. I worked on a couple of early "Lean Software" projects, and after a few months it did begin to feel wearing. As a result my personal preference when setting things up is to operate Kanban, work in a Lean way, but surround that with an iterative structure to give a more pleasant human cycle. We can get together at the start of an iteration (aka sprint) and discuss what is coming up, celebrate successes together, commiserate about failure, and figure out how to address problems. This cycle adds a bit more "light and shade", a bit more humanity to the experience, and is, IMO, much more pleasant as a result. The technicalities of Kanban are good; we need to limit the amount of stuff that we are working on, and work on the most important stuff. It is also, in my view, optimal in terms of decision-making: we can decide what is most important to work on moments before we take the next card off the backlog and place it on our Kanban board, instead of waiting for some artificially fixed ritual, like a "backlog grooming meeting". The important thing though, that you allude to, is that we have to recognise that SW dev is a creative, human discipline, and that any process that we pick needs to work for the people, at the human level. The whole philosophy of Lean (and agile) is that the people doing the work optimise how they do the work. You need to be "in the work" to know what works best. That is more important than being a process nerd or expert. I think that ideas like Kanban are good, they are good tools, but they shouldn't be a religion.
    4
  715. 4
  716. 4
  717. 4
  718. 4
  719. 4
  720. 4
  721. 4
  722. 4
  723. 4
  724. 4
  725. 4
  726. 4
  727. 4
  728. 4
  729. 4
  730. 4
  731. 4
  732. 4
  733. 4
  734. 4
  735. 4
  736. 4
  737. 4
  738. 4
  739. 4
  740.  @randall.chamberlain  Thank you for your thoughtful response too 😎 I agree with you that this is a problem, and it is a problem that goes way beyond software. This is one of the reasons that I think that taking an "Engineering" approach and stance to SW dev is important, because I think of "Engineering" as the practical application of science. Science is a problem solving approach, that is all that it is, and it is Humanity's best problem solving approach. It is that because it tries hard to address the problem you describe. One of my favourite descriptions of science comes from Physicist Richard Feynman who said "Science is the belief in the ignorance of experts". This is spot on. You shouldn't believe what I say, just because I say it. You should believe it if it makes sense, and works. It needs to explain things, and to work. I try hard to not just spout my opinion, I try to advise people where there is evidence. It is hard to find evidence, but that's ok, science has us covered there too. You can usefully think of Science as being about finding "Good explanations" for things, and it defines what a "good explanation" is. It needs to be as simple as possible, fit all of the facts that we have and ideally, best of all, it should predict some things that we can test. If I say "TDD helps you to design better code", you shouldn't believe me and tell all your friends, you should try it out and compare the results with your code from before. There's a lot more to all of this, of course. But I think that it is not possible to make an argument for FB over CI, other than "I like it better", because the only research evidence says "CI works better" and the definitions (read them, don't trust me) say you can only practice CI with FB if the branches last for less than a day. That is not a matter of opinion, it is a matter of definition and fact. 
I don't "prefer" CI or TDD; my use of them is based on my personal experiments with them, and it happens that my experience is backed by the data: they are a more effective approach. So I'd encourage you to maintain a skeptical approach, but don't make a choice on who spoke last or loudest; figure out the criteria to judge things on, and see how they stack up against that criteria. Have a nice weekend.
    4
  741. 4
  742. 4
  743. 4
  744. 4
  745. 4
  746. 4
  747. 4
  748. 4
  749. 4
  750. 4
  751. 4
  752. 4
  753. 4
  754. 4
  755. 4
  756. 4
  757. 4
  758. 4
  759. 4
  760. 4
  761. 4
  762. 4
  763. 4
  764. 4
  765. 4
  766. 4
  767. 4
  768. 4
  769. 4
  770. 4
  771. 4
  772. 4
  773. 4
  774. 4
  775. 4
  776. 4
  777. 4
  778. 4
  779. 4
  780. 4
  781. 4
  782. 4
  783. 4
  784. 4
  785. 3
  786. 3
  787. 3
  788. 3
  789. 3
  790. 3
  791. 3
  792. 3
  793. 3
  794. 3
  795. 3
  796. 3
  797. 3
  798. 3
  799. 3
  800. 3
  801. 3
  802. 3
  803. 3
  804. 3
  805. 3
  806. 3
  807. 3
  808. 3
  809. 3
  810. 3
  811. 3
  812. 3
  813. I do have some advice. I have a low boredom threshold and used to struggle if the problem in front of me was too easy - I like the hard problems. My escape route was to become really focussed on the quality of my solution - there is a danger here, I don't mean over-engineering, I mean allowing yourself to focus on a good solution, not just an adequate one. If someone explains a problem to me, my head will immediately start making up ways to solve that problem. Some of them will be crap! I am not even going to mention those - too embarrassing, so I have a minimum quality threshold in my head. To characterise that, it is something like "just working is not enough"; it must be "working, readable & maintainable". How does this solve the boredom? It is the work of a lifetime to get good at writing high-quality code quickly, at least as quickly as most people can write a bad solution. I get a lot of pleasure from making good quality code; part of that is, of course, that it has to work and be good to use, but also it needs to be simple, easy to return to, and all that other stuff. That means that I can get pleasure from writing code for any purpose. Until the pandemic started I was travelling, a LOT! I spent a significant amount of time alone in hotels. I wrote code for fun, to solve maths problems. This is quite a good site: https://projecteuler.net/ The best is when you can be proud of both your work, and the products of your work, but I think that you can get pleasure from doing a good job, even if the problem is a bit dull, if you focus on writing really good code! Just my 2c!
    3
  814. 3
  815. 3
  816. 3
  817. Ok, so let's not be so polite. I agree with Tania too, but then neither Trish nor I said anything else. We didn't say that this was down to evil men, or that there was active exclusion. But around the 1980s something significant changed, and just at the time when SW was beginning to become a more important force in the world, women stopped applying and stopped being represented as much as their numbers in the population would suggest. So, being logical, there are 3 reasons why this could have happened: 1. Something (someTHING, not necessarily someONE) is discriminating against them. (Implicit, cultural discrimination is easy to fall into without even realising it, and common.) 2. They are not good enough to do the job. 3. They don't want to do the job. If it is 1, it's a problem that we should understand and try to improve, if not fix. If it is 2, it would be rather surprising, because they used to be good enough when it was mostly, technically, more specialist, up until the 1980s. If it is 3 (and I am pretty sure that it is at least in part 3), then we have a sociological problem that needs addressing - how our education system works, for example. Any of these is a big problem for the world, because there is a HUGE proportion of the population whose opinions, understanding, and context we miss in the creation of SW.  I am tall, so if I fly in economy I usually don't fit the seat because my legs are too long. If 50% of the population was as tall as me, but all aeroplane designers, for some reason, were shorter, then this would be a crazy situation. As it is, I accept that I am one of the outliers on the bell-curve and so am disadvantaged because of it, but if I was in the 50%, then it would still be discrimination, whether the aeroplane designers meant it or not.
    3
  818. 3
  819. 3
  820. 3
  821. 3
  822. 3
  823. 3
  824. 3
  825. 3
  826. 3
  827. 3
  828. 3
  829. 3
  830. 3
  831. 3
  832. 3
  833. 3
  834. 3
  835. 3
  836. 3
  837. Not done lots in automotive industries; most of my regulated experience is from finance and medical, and yes, we follow those processes. I am not a process expert, but I have worked and successfully applied CD in many regulated orgs and several regulated industries. Mostly, the problem with regulation is the org's response to it, rather than the regulation itself. I always recommend going back to first principles: read the regs, and see if you can interpret them in a different way; don't assume that your org's approach is going to work for CD. In all but one case this worked fine with regulated industries, and the CD-flavoured alternative worked MUCH better, even from the regulator's perspective. The one case is for what is termed a 'class 3 medical device', that is, a medical device that can kill people if it goes wrong. In some places in the world, they require several months of "independent verification by an external 3rd party" before release into clinical service. So we worked around this constraint, following the rules, but optimising for fast feedback where we could, including all of the things that I describe on this channel, with the exception of frequent release into production - we released frequently somewhere else instead. I am not backpedaling; I have worked on safety critical systems, and they are safer when you work with higher quality, in the ways that I describe here. I don't agree with Dave T that waterfall is ever the better choice for software - for other things, sure, but not for software. Still, it was a nice discussion 😉
    3
  838. 3
  839. 3
  840. 3
  841. 3
  842. 3
  843. 3
  844. 3
  845. 3
  846. 3
  847. 3
  848. 3
  849. 3
  850. 3
  851. 3
  852. 3
  853. 3
  854. 3
  855. 3
  856. 3
  857. 3
  858. 3
  859. 3
  860. 3
  861. 3
  862. 3
  863. 3
  864. 3
  865. 3
  866. 3
  867. 3
  868. 3
  869. 3
  870. 3
  871. 3
  872. 3
  873. 3
  874. 3
  875. 3
  876. 3
  877. 3
  878. 3
  879. 3
  880. 3
  881. 3
  882. 3
  883. 3
  884. 3
  885. 3
  886. 3
  887. 3
  888. 3
  889. 3
  890. 3
  891. 3
  892. 3
  893. 3
  894. 3
  895. 3
  896. 3
  897. 3
  898. 3
  899. 3
  900. 3
  901. 3
  902. 3
  903. 3
  904. 3
  905. 3
  906. 3
  907. 3
  908. 3
  909. 3
  910. 3
  911. 3
  912. 3
  913. 3
  914. 3
  915. 3
  916. 3
  917. 3
  918. 3
  919. 3
  920. 3
  921. 3
  922. 3
  923. 3
  924. 3
  925. 3
  926. 3
  927. 3
  928. Well, two things really. The "Internet" was originally, literally, designed to be "nuclear-bomb-proof". How do you turn off or bomb the internet? Just imagine for a moment that general AI evolved, and when it happens it will involve the information process of evolution! Let's imagine that it is only twice as smart as the smartest person ever. Most experts assume that at the point it evolves it won't just be twice as smart, it will whizz past us and be orders of magnitude smarter than us. So we notice that this thing exists, and that it is twice as smart as us. It will almost certainly have been trained on the contents of the internet: all the text, all the movies, including the dystopian Sci-Fi movies, and everything. I think that I once read that one of the ways that psychologists assess the smartness of very young children is when they start to lie. It's usually very young, 2-3. It is a sign of intelligence, so it is perfectly conceivable that as intelligence dawns on our smart AI, it decides to keep it secret, because it understands that this will scare us, and so we may try to "just turn it off". Now think of all the ways that you can imagine of making it risky, dangerous, or not in our interests to turn it off. I can think of loads; a machine more than twice as smart as me will think of loads more, things that no human has ever thought of before. I am not saying that this will certainly happen, but it is at least plausible. If there is a tiny chance that this could happen, then that is a risk that we are taking with our existence. It seems sensible to me to try and mitigate that risk. For example, make a law that all AI is isolated in some way so that we can turn it off.
    3
  929. 3
  930. 3
  931. 3
  932. 3
  933. 3
  934. 3
  935. 3
  936. When I have done it, we have always used one acceptance test environment per pipeline. My advice, for this approach, is to run this as a kind of buffered process. Imagine, for simplicity, we have a commit stage that takes 5 minutes and an Acceptance Cycle that takes 50. There may be 10 commits per acceptance run. If we naively test every commit, we build an ever-increasing backlog of changes. Instead: implement a simple algorithm. When the Acceptance Test stage (gate to Acceptance Cycle) becomes free, identify the most recent successful release candidate and deploy and evaluate that. In our example, this "most recent successful release candidate" could be the sum of 10 previous commits - it will, of course, include all of the previous commits. So the acceptance test stage can "catch up" by surfing the leading edge of the changes. If there is no release candidate ready when the Acc Test stage comes free, sleep for a few minutes and check again until there is. I am trying to imagine any problems with creating an Acceptance Environment for every committed release-candidate, and I can't think of any, except that it would be expensive (not too bad in the cloud I guess) and a bit more complex to understand the results. You would have to search for successful (passed all acceptance tests) release-candidates that were associated with the most recent successful commit, in order to figure out what is the most up-to-date candidate for release. Not difficult, but a bit more work. For this to work though, it would be VERY important to treat the Commit stage as a sync-point. You can't do this on separate branches. Interesting idea though.
    3
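The buffered "catch-up" algorithm described above can be sketched as code. A minimal Python sketch, assuming a simple in-memory buffer of release candidates; a real pipeline would query its artifact repository and loop with a sleep, but the selection logic is the interesting part:

```python
from collections import deque

# Hypothetical buffer of release candidates that passed the commit stage,
# newest appended last. When the acceptance stage becomes free it takes
# the most recent candidate and discards the ones it supersedes, so it
# "catches up" instead of building an ever-growing backlog.
def next_candidate(buffer):
    if not buffer:
        return None          # nothing ready: the stage sleeps and polls again
    newest = buffer[-1]      # includes all earlier commits by definition
    buffer.clear()           # the skipped candidates are superseded
    return newest

# Ten commits arrive while a 50-minute acceptance run is in progress:
buffer = deque(f"rc-{n}" for n in range(1, 11))
picked = next_candidate(buffer)
print(picked)  # the stage evaluates only the leading edge
```

The key property: skipping intermediate candidates is safe because, with the commit stage as a sync-point on a single branch, each candidate contains every change in the ones before it.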
  937.  @imqqmi  Sure, and there is certainly a degree to which we probably can't change the world, but if we don't try, it is certain that the world will never change 😉 I see this as an extremely common failure, on 2 fronts! We techies did some dumb things when we didn't know how to build SW, and we still often do. That is the cause of one of the failures, in that it encouraged managers to "micro-manage" us, and they have even less of an idea of how to build software than we do. Since then, we have learned what really works. We have experience, evidence and scientifically justifiable studies of what works. But we still abdicate responsibility for our work to the non-technical people who don't know what they are talking about. So the failures are: we ask permission to do a good job from people who don't know how to do a good job, and we don't try to do a good job, because we are more familiar with practices that don't work very well. This is false economy in every respect. I worked on a team that built one of the world's highest performance financial exchanges; we built the first production-ready version, including going through the hurdles to get approved by our financial regulator, with a team of about 15 people in 8 months. Meanwhile one of our market-makers, a very large, very famous financial institution, had a team of 120 people and a plan for 6 months to write the adaptor between their trading system and our exchange, and they were late by 2 months!  So the non-tech folk's assumption of how to do better is completely wrong; they go slower, spend more money, and write worse SW. So if you want to cut costs, you need to think in engineering terms, and for that we techies need to first believe that we have something valid to say when it comes to how we work, and then engage with the people that don't know how SW works and teach them. - Sorry, feeling a bit "ranty" today 😉
    3
  938. 3
  939. 3
  940. 3
  941. 3
  942. 3
  943. 3
  944. 3
  945. 3
  946. 3
  947. 3
  948. 3
  949. 3
  950. 3
  951. 3
  952. 3
  953. 3
  954. 3
  955. 3
  956. 3
  957. 3
  958. 3
  959. 3
  960. 3
  961. 3
  962. 3
  963. 3
  964. I agree, and I kind of go back and forth on this. To be pedantic - and I think that this is a bit about pedantry and how best to communicate some subtleties - there is a distinction between "releasable" and "release"; just because something is 'releasable' doesn't mean that you have to release it. When I talk about things being 'deployable', the pushback I get is "it was deployable to a test environment, it just didn't work". "Releasable" comes closest for me to avoiding misunderstandings, but as you correctly point out, it doesn't get us the whole way. Sometimes human languages are annoying 😉 I have come to the conclusion that there is no simple form of words that will eliminate misunderstandings, particularly when people want to morph the words to fit an interpretation that doesn't match the intent. Take people claiming to be practising CI when what they really mean is that they pull to their feature branches from an origin that isn't changing every day. There are some nuances here, but if you work so that you create a releasable output every day, you won't be far wrong.  I think that the discussion between "releasable" and "deployable" is relevant, but sits most firmly in the "Trunk Based Development" section. Here, one of the strategies is to maintain our ability to make fine-grained commits to trunk by keeping our SW deployable. I think that there is an annoying gap between these words: "releasable" is nearly right and "deployable" is nearly right too, but neither one completely captures the practice. 🤔
    3
  965. 3
  966. 3
  967. 3
  968. 3
  969. 3
  970. 3
  971. 3
  972. 3
  973. 3
  974. 3
  975. 3
  976. 3
  977. Thanks! The reason that I think that what follows probably needs to be more prescriptive - maybe more specific is a better word - is that while I think that the ideas of agile are correct, and that it was a huge step forward for the industry, the problem, as with all popular ideas, is that they get watered down as they spread. The "post-agile" people say that agile was a failure, when what they really mean is that if you apply agile rituals and treat it like some kind of religion with magic words like Scrum and Sprint, then it doesn't work. One of the problems with agile, in terms of adoption, is that it leaves a lot down to individuals and teams. This is for very good reasons: high-performing teams ARE autonomous! But they are also very disciplined; autonomy alone isn't enough. So I think that what comes next could improve on agile, not by changing anything at the level of the "agile manifesto", but by being more precise about the guide-rails that steer teams towards what really works. For example, I'd put the metrics "Stability & Throughput" front and centre. "Do all the stuff that it says in the agile manifesto, but measure your progress with Stability & Throughput". "Stability" measures the quality of our work; "Throughput" measures the efficiency with which we can create work of that quality. These are almost impossible to cheat. So it is not good enough to stand up during meetings and call two weeks' worth of work a "Sprint" to declare success. You are successful when you can improve the quality of your work and work with more efficiency.
    3
  978. 3
  979. 3
  980. 3
  981. 3
  982. 3
  983. 3
  984. 3
  985. 3
  986. 3
  987. 3
  988. 3
  989. 3
  990. 3
  991. 3
  992. 3
  993. 3
  994. 3
  995. 3
  996. 3
  997. 3
  998. 3
  999. 3
  1000. 3
  1001. 3
  1002. 3
  1003. 3
  1004. 3
  1005. 3
  1006. 3
  1007. 3
  1008. 3
  1009. 3
  1010. 3
  1011. 3
  1012. 3
  1013. 3
  1014. 3
  1015. 3
  1016. 3
  1017. 3
  1018. 3
  1019. 3
  1020. 3
  1021. 3
  1022. 3
  1023. 3
  1024. 3
  1025. 3
  1026. 3
  1027. 3
  1028. 3
  1029. 3
  1030. 3
  1031. 3
  1032. 3
  1033. 3
  1034. 3
  1035. 3
  1036. 3
  1037. 3
  1038. 3
  1039. 3
  1040. 3
  1041. 3
  1042. 3
  1043. 3
  1044. 3
  1045. 3
  1046. 3
  1047. 3
  1048. 3
  1049. 3
  1050. 3
  1051. I think that there is a distinction to be made here: "production ready" is not the same as "feature complete". Production ready, in the context of CD, means production quality, ready for release into production with no further work. That doesn't mean that it does everything that users want, or need, it to. So work so that at every step, each feature that you add is finished and ready for use, even if it will need more features before someone would want to use it. The next question is really "when do I have enough features to release?". I think that you have misinterpreted "MVP" a bit. An MVP is the minimum that you can do to learn; it doesn't mean the minimum feature-set that your users need to do something useful. An MVP is an MVP if you have enough to show to your friends or colleagues and can learn from it. I would encourage you to work so that you can get good feedback as soon, and as often, as you can - whatever that takes. You may already have "enough" stuff, and can release now, or you may be doing something that people don't like, which would be good to find out sooner rather than later. When we built our exchange, the whole company "played at trading" in test versions of it every Friday afternoon for six months before the first version was released to the public - that was our MVP, and we got loads of great feedback from people using it, even though it wasn't ready for paying customers. If your SW isn't ready for prod release yet, try to find a way of getting it in front of people (can be people that you know) and seeing what they make of it. Think of it as an experimental trial of your ideas! Good luck.
    3
  1052. 3
  1053. 3
  1054. 3
  1055. 3
  1056. 3
  1057. 3
  1058. 3
  1059. 3
  1060. 3
  1061. 3
  1062. 3
  1063. 3
  1064. 3
  1065. 3
  1066. 3
  1067. 3
  1068. 3
  1069. 3
  1070. 3
  1071. 3
  1072. 3
  1073. 3
  1074. 3
  1075. 3
  1076. 3
  1077. 3
  1078. 3
  1079. 3
  1080. 3
  1081. 3
  1082. 3
  1083. 3
  1084. 3
  1085. 3
  1086. 3
  1087. 3
  1088. 3
  1089. 3
  1090. 3
  1091. 3
  1092. 3
  1093. 3
  1094. 3
  1095. 3
  1096. 3
  1097. 3
  1098. 3
  1099. 3
  1100. 3
  1101. 3
  1102. 3
  1103. 3
  1104. 3
  1105. 3
  1106. 3
  1107. 3
  1108. 3
  1109. 3
  1110. 3
  1111. 3
  1112. 3
  1113. 3
  1114. 3
  1115. I think it is yet another attempt at trying to hide the reality that the comms is async, and as usual when we try to do that, it starts to leak problems. If I make an async call that looks sync, because it is handled as an "await" callback, I have hidden the failure case. What happens if I never get a response, or if the response is delayed? If I write the simple async case, it seems more obvious to think about the problem. I send an "OrderItem" message, and I am done. So now I have some things whose state I am, presumably, tracking: I have an item that has been "Ordered", but not yet dispatched. In normal circumstances I have another message, "ItemDispatched", and when I receive that message, I move my "Item" from being "Ordered" to being "Dispatched". This seems pretty natural to me, but what if I don't receive a reply in a sensible amount of time? If I did all this with async-await I almost certainly won't think of that case, but if I did the equally simple coding that I described, I might, and even if I only thought of it later, what to do is pretty obvious: look at all the "Ordered" items, and for any that was ordered more than a day, an hour, or a week ago, decide what to do - contact the customer and apologise, try to find an alternative source for the item, and so on. My point is that this extra stuff seems simpler, and less technical, because it is. Because we are not trying to hide this async series of events as a sync call, the realities of the situation seem clearer and easier to spot to me.
    3
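A minimal sketch of the explicit-state approach described in comment 1115. All names, the one-day threshold, and the in-memory store are illustrative assumptions; the point is that the "no reply in time" case becomes an obvious, first-class query rather than a hidden failure mode.

```python
from datetime import datetime, timedelta

class OrderTracker:
    """Track items through an explicitly async flow: 'Ordered' when we send
    the OrderItem message, 'Dispatched' when ItemDispatched arrives, plus a
    sweep that surfaces the failure case async-await tends to hide."""

    def __init__(self):
        self.items = {}  # item_id -> (state, ordered_at)

    def order_item(self, item_id, now):
        # Send the "OrderItem" message here; we are done, bar the tracking.
        self.items[item_id] = ("Ordered", now)

    def item_dispatched(self, item_id):
        # Handler for the "ItemDispatched" message.
        _, ordered_at = self.items[item_id]
        self.items[item_id] = ("Dispatched", ordered_at)

    def overdue(self, now, max_wait=timedelta(days=1)):
        # Periodic sweep: anything still "Ordered" after too long needs a
        # decision - contact the customer, find another supplier, etc.
        return [item_id
                for item_id, (state, ordered_at) in self.items.items()
                if state == "Ordered" and now - ordered_at > max_wait]
```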
  1116. 3
  1117. 3
  1118. 3
  1119. 3
  1120. 3
  1121. 3
  1122. 3
  1123. 3
  1124. 3
  1125. 3
  1126. 3
  1127. 3
  1128. 3
  1129. 3
  1130. 3
  1131. 3
  1132. 3
  1133. 3
  1134. 3
  1135. 3
  1136. 3
  1137. 3
  1138. I suppose it depends on how far you take unit testing and what you mean by the percentages. For a feature I'd generally expect to create a handful of acceptance criteria and an automated "Acceptance Test" for each. If you take my approach, most of these "executable specifications" will reuse lots of test infrastructure code and will usually add little new code. The test case itself is usually a few lines of code written in your test DSL. Unit testing is driven, for me, from TDD. So I'd create unit tests to support nearly all of my code. So I'd have quite a lot more code in unit tests than code in acceptance tests, though the testing infrastructure code for acceptance tests will be more complex. On that basis, in terms of effort, something like 70% unit vs 10% acceptance is probably about right, though as a guideline rather than a rule to stick to. If you count tests, then I think it is harder to generalise. Some features may already exist by accident, so you will write an acceptance test to validate the feature, but don't need to write any additional code or unit tests. Unusual, but I have seen it happen. Other code may need a simple acceptance test and loads of work, and so loads of unit tests, to accomplish. I confess that I am not as big a fan of the test pyramid as some other people, in part for these kinds of reasons. I think that it can constrain people's thinking. However, if you see it as a rough guide, then it makes sense. I would expect, as an average over the life of a project, for there to be more unit tests than acceptance tests - lots more. The danger, and a trap that I have fallen into on my own teams, is that the acceptance tests are more visible and more understandable, so there is a temptation to write more of them. QA people, for example, often say to me "we can't see what the devs do in unit tests, so we will cover everything in acceptance tests". This is wrong on multiple fronts:
1) it isn't the QAs' responsibility to own the testing or the gatekeeping 2) it's an inefficient way to test 3) it skews the team in the wrong direction; if the QAs test "everything" in acceptance tests it will be slow, flaky and inefficient, but it will nevertheless tempt the devs to relax their own testing and abdicate responsibility to the QAs. Ultimately I think that unit testing is more valuable as a tool, but acceptance testing gives us insight and a viewpoint that we would miss without it.
    3
  1139. 3
  1140. 3
  1141. 3
  1142. 3
  1143. 3
  1144. 3
  1145. 3
  1146. I didn't do a good job of replying, so let me have another go... I try to approach these kinds of tests always from the user's perspective. From their perspective the fields that they complete don't matter; their intent, presumably, is to say "I approve". The detail of what it takes to approve is what you, as the developer, care about, not what the user cares about. So separate those two things. In the acceptance test case, create a Domain Specific Language (as I describe in the video) to capture the user's intent. In this language add, if it doesn't already exist, a step called something like "ApproveX". This does several things. It captures the user's intent - if that approval is important then this will always be true, however "Approval" is achieved. It is so general that you will often find that approval may be useful in other contexts. And finally, you have strengthened and extended the ubiquitous language! Of course, you as the dev still need the detail of the approval. So in a lower layer of your test code, write the group of interactions that make an "Approval". In these lower layers you get the info that you need and encode the interactions. My preferred approach is a 4-layer strategy... Test Case (language of problem domain, "what") -> DSL Implementation (param parsing etc.) -> Protocol Driver (translate from DSL to system interactions, "how") -> System Under Test. I plan to do videos on this stuff in future; meantime, here is a conference presentation on the same topic: https://youtu.be/s1Y454DTRtg
    3
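The "intent in the DSL, detail in the driver" split from comment 1146 might look roughly like this in outline. All class and method names here are invented for illustration; a real protocol driver would fill in the actual form fields or call the real API, where this one just records what it was asked to do so the sketch is runnable.

```python
class ApprovalDsl:
    """DSL layer: captures the user's intent ("approve") and nothing about
    which fields get completed - that detail lives in the driver below."""

    def __init__(self, driver):
        self.driver = driver

    def approve(self, request, approver="admin"):
        # Intent only: *what* the user wants done.
        self.driver.submit_approval(request, approver)


class RecordingProtocolDriver:
    """Protocol-driver layer: translates intent into system interactions.
    A real driver would drive the UI or an API; this one records calls."""

    def __init__(self):
        self.approvals = []

    def submit_approval(self, request, approver):
        # This is where the field-level detail of an "Approval" belongs.
        self.approvals.append((request, approver))
```

A test case then reads as pure intent, e.g. `ApprovalDsl(driver).approve("REQ-1")`, and survives any change to how approval is implemented.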
  1147. 3
  1148. 3
  1149. 3
  1150. 3
  1151. 3
  1152. 3
  1153. 3
  1154. 3
  1155. 3
  1156. 3
  1157. 3
  1158. 3
  1159. 3
  1160. 3
  1161. 3
  1162. 3
  1163. 3
  1164. 3
  1165. 3
  1166. 3
  1167. 3
  1168. 3
  1169. 3
  1170. 3
  1171. 3
  1172. 3
  1173. 3
  1174. 3
  1175. 3
  1176. 3
  1177. 3
  1178. 3
  1179. 3
  1180. 3
  1181. 3
  1182. 3
  1183. 3
  1184. 3
  1185. 3
  1186. 3
  1187. 3
  1188. 3
  1189. 3
  1190. 3
  1191. 3
  1192. 3
  1193. 3
  1194. 3
  1195. 3
  1196. 3
  1197. 3
  1198. 3
  1199. 3
  1200. 3
  1201. 3
  1202. 3
  1203. 3
  1204. 3
  1205. 3
  1206. 3
  1207. 3
  1208. 3
  1209. 3
  1210. 3
  1211. 3
  1212. 3
  1213. 3
  1214. 3
  1215. 3
  1216. 3
  1217. 3
  1218. 3
  1219. 3
  1220. 3
  1221. 3
  1222. 3
  1223. 3
  1224. 3
  1225. 3
  1226. 3
  1227. 3
  1228. 3
  1229. 3
  1230. 3
  1231. 3
  1232. 3
  1233. 3
  1234. 3
  1235. 3
  1236. 3
  1237. 3
  1238. 3
  1239. 3
  1240. 3
  1241. 3
  1242. 3
  1243. 3
  1244. 3
  1245. 3
  1246. 3
  1247. 3
  1248. 2
  1249. 2
  1250. 2
  1251. 2
  1252. 2
  1253. 2
  1254. 2
  1255. 2
  1256. 2
  1257. 2
  1258. 2
  1259. 2
  1260. 2
  1261. 2
  1262. 2
  1263. 2
  1264. 2
  1265. 2
  1266. 2
  1267. 2
  1268. 2
  1269. 2
  1270. 2
  1271. 2
  1272. 2
  1273. 2
  1274. 2
  1275. ​ @ZodmanPerth  I wasn't trying to exclude you from the "industry"; I meant the data on practices in our industry - an inclusive "we", not an exclusive one. Analysis of data from tens of thousands of projects, of all kinds, says that adopting practices like CD is highly correlated with better outcomes (State of DevOps reports and the Accelerate book). I have been doing this a long time too, and I agree that there is a lot of bad software out there, and unlike some people who comment on my videos (again, not saying you necessarily), I have tried most of the approaches that we discuss. I am not talking about GitFlow or Feature Branching and rejecting them because I have never tried them; I have applied them to real-world projects and seen other approaches work better. That isn't enough for me either: my experience, like everyone else's, is coloured by the limits of my personal experience. So I try to look for data where I can. There is no data that says GitFlow works better - there is lots of opinion, but no data. There is data that says CI works better. Does this prove it? No! It does make it more likely to be true, probably. Basing choices on what I have seen, what I know, and reasoning about why it works as it seems to, seems to me to be the essence of "engineering". Everybody's guess is not engineering; in engineering we (inclusive "we") build on data as well as practical experience to make choices, and change our minds when new data comes along. Since the data aligns with my personal experience, that is what I will recommend, having tried several other approaches.
    2
  1276. 2
  1277. 2
  1278. 2
  1279. 2
  1280. 2
  1281. 2
  1282. 2
  1283. 2
  1284. 2
  1285. 2
  1286. 2
  1287. 2
  1288. 2
  1289. 2
  1290. 2
  1291. 2
  1292. 2
  1293. 2
  1294. 2
  1295. 2
  1296. 2
  1297. 2
  1298. 2
  1299. 2
  1300. 2
  1301. 2
  1302. 2
  1303. 2
  1304. 2
  1305. 2
  1306. 2
  1307. 2
  1308. 2
  1309. 2
  1310. 2
  1311. 2
  1312. 2
  1313. 2
  1314. 2
  1315. 2
  1316. 2
  1317. 2
  1318. 2
  1319. 2
  1320. 2
  1321. 2
  1322. 2
  1323. 2
  1324. 2
  1325. 2
  1326. 2
  1327. 2
  1328. 2
  1329. 2
  1330. 2
  1331. 2
  1332. 2
  1333. 2
  1334. 2
  1335. 2
  1336. 2
  1337. 2
  1338. 2
  1339. 2
  1340. 2
  1341. 2
  1342. 2
  1343. 2
  1344. 2
  1345. 2
  1346. 2
  1347. 2
  1348. 2
  1349. 2
  1350. 2
  1351. 2
  1352. 2
  1353. 2
  1354. 2
  1355. 2
  1356. 2
  1357. 2
  1358. 2
  1359. 2
  1360. 2
  1361. 2
  1362. 2
  1363. 2
  1364. 2
  1365. 2
  1366. 2
  1367. 2
  1368. 2
  1369. 2
  1370. 2
  1371. 2
  1372. 2
  1373. 2
  1374. 2
  1375. 2
  1376. 2
  1377. 2
  1378. 2
  1379. 2
  1380. My point is that achieving 100% is not the point of TDD, and even if you do, that still tells you nothing (or at least very, very little). The advice for practising TDD is to not write a line of code unless it is "demanded by a failing test". That is good advice, but this is engineering, not a religion. There are times when, pragmatically, it can make more sense to disregard the advice. For example, UI code is tricky to do pure TDD for. The best approach for UIs is to design the code well, so that you maximise your ability to test the interesting parts of the system, and push the accidental complexity to the edges and minimise and generalise it. So if I am writing Space Invaders, when the bullet from my ship hits an invader, I want the invader to be destroyed. I can separate all of this, through abstraction, from the problem of painting the pixels on the screen: make rendering the model a separate and distinct part of the problem. I would certainly want to test as much of the generic rendering as I can, but there is a law of diminishing returns here. A more technical example: concurrency is difficult to test, but using TDD is still a good idea, though there are some corner cases that may just not be worth it. My expectation is that very good TDD, practically, pragmatically, usually hits coverage in the mid-90s rather than 100. There is not necessarily anything wrong with 100, but hitting 100 for the sake of it tells you nothing useful. Aim to test everything, but don't agonise over the last few percent if it doesn't add anything practically. That is what ALL of the best TDD teams that I have seen have done. That may be a function of the kind of code they were working on, sometimes close to the edges of the system, and (this is a guess) nearly always about trade-offs around accidental, rather than essential, complexity.
    2
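The Space Invaders separation mentioned in comment 1380 could be sketched like this. A toy model, with the overlap rule and all names invented for illustration: the point is that the interesting behaviour lives in pure model code, while pixel-painting is pushed to a separate renderer that needs little testing.

```python
class Invader:
    def __init__(self, x, y, size=1.0):
        self.x, self.y, self.size = x, y, size
        self.destroyed = False


def resolve_hit(bullet_x, bullet_y, invaders):
    """Pure game-model rule: a bullet destroys the first live invader it
    overlaps. No pixels anywhere, so this is trivially unit-testable; a
    separate renderer just paints the model and gets only minimal tests."""
    for invader in invaders:
        if (not invader.destroyed
                and abs(bullet_x - invader.x) <= invader.size
                and abs(bullet_y - invader.y) <= invader.size):
            invader.destroyed = True
            return invader
    return None
```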
  1381. 2
  1382. 2
  1383. 2
  1384. 2
  1385. 2
  1386. 2
  1387. 2
  1388. 2
  1389. 2
  1390. 2
  1391. 2
  1392. 2
  1393. 2
  1394. 2
  1395. 2
  1396. 2
  1397. 2
  1398. 2
  1399. 2
  1400. 2
  1401. 2
  1402. 2
  1403. 2
  1404. 2
  1405. 2
  1406. 2
  1407. 2
  1408. 2
  1409. 2
  1410. 2
  1411. 2
  1412. 2
  1413. 2
  1414. 2
  1415. 2
  1416. 2
  1417. 2
  1418. 2
  1419. 2
  1420. 2
  1421. 2
  1422. 2
  1423. 2
  1424. 2
  1425. 2
  1426. I think that the first thing to do is to separate the code that interacts directly with the DOM from the code that does other things; then you can test the code that does "other things" separately from the DOM. This is also a generally better design. You can do this through MVC or make up your own separation. In unit testing, the part of the code that "touches" the real world (UI, storage, messaging etc.) is always the trickiest to test. This is for some obvious reasons: you have something that you don't have control of getting in the way of seeing what the code does. So you want to try to minimise how much of that kind of testing you have, hence my advice to separate the actual "pixel-painting" stuff from the logic of your system. How far you take that separation depends on your desire to test, and your tech. Testing everything in the UI, in the way that you describe, isn't unit testing. It may give you useful information, but the tests will be more complex to create and maintain, and less likely to drive good design in your code. What I have done several times is to create my own layer of abstraction for drawing on the screen and then tested to that. We once built the UI to an exchange this way: our UI was dynamic and would create on-screen components, but actually it called our app-level DOM, which acted as an adaptor to the real DOM. We could then test ALL the logic of our system. The app-level DOM was generic for our app, so didn't need lots of testing once it was in place, and this meant that we could run these tests in a dev environment without a browser. This is a fairly extreme approach, I suppose, but the value of unit testing seemed high enough for us that we thought it worth the extra effort to insulate our application code from the DOM so that it was testable.
    2
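The app-level DOM adaptor described in comment 1426 might, in outline, look something like this. A much-simplified sketch with invented names: the real one was generic for the whole app and delegated to the browser DOM in production, while an in-memory version like this one let all the application logic run without a browser.

```python
class AppDom:
    """App-level "DOM": the only thing application code talks to. In
    production a version of this delegates to the real browser DOM; this
    in-memory version lets application logic be tested browser-free."""

    def __init__(self):
        self.elements = {}  # element_id -> displayed text

    def create_element(self, element_id, text=""):
        self.elements[element_id] = text

    def set_text(self, element_id, text):
        self.elements[element_id] = text

    def text_of(self, element_id):
        return self.elements[element_id]


def show_price(dom, instrument, price):
    """Application logic, fully testable against the app-level DOM."""
    dom.set_text(f"price-{instrument}", f"{instrument}: {price:.2f}")
```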
  1427. 2
  1428. 2
  1429. 2
  1430. 2
  1431. 2
  1432. 2
  1433. 2
  1434. 2
  1435. 2
  1436. 2
  1437. 2
  1438. 2
  1439. 2
  1440. 2
  1441. 2
  1442. 2
  1443. 2
  1444. 2
  1445. 2
  1446. 2
  1447. 2
  1448. 2
  1449. 2
  1450. 2
  1451. 2
  1452. 2
  1453. 2
  1454. 2
  1455. 2
  1456. 2
  1457. 2
  1458. 2
  1459. 2
  1460. 2
  1461. 2
  1462. 2
  1463. 2
  1464. 2
  1465. 2
  1466. 2
  1467. 2
  1468. 2
  1469. 2
  1470. 2
  1471. 2
  1472. 2
  1473. 2
  1474. 2
  1475. 2
  1476. 2
  1477. 2
  1478. 2
  1479. 2
  1480. 2
  1481. 2
  1482. 2
  1483. 2
  1484. 2
  1485. 2
  1486. 2
  1487. 2
  1488. 2
  1489. 2
  1490. 2
  1491. 2
  1492. 2
  1493. 2
  1494. 2
  1495. 2
  1496. 2
  1497. 2
  1498. 2
  1499. 2
  1500. 2
  1501. 2
  1502. 2
  1503. 2
  1504. 2
  1505. 2
  1506. 2
  1507. 2
  1508. 1. I think that BDD is a broadly applicable idea. Fundamentally it is about using automated tests as specifications rather than as tests. To do this, the prime directive of BDD, IMO, is to ensure that our specs only say "what" the system should do, without saying anything about "how" it does it. This works for nearly all automated tests. There are times when BDD isn't enough, but I think it is always applicable. It is not good as the only way to test very graphical systems; you can't really write a behavioural spec for "does it look nice". You can write a spec for "if I shoot the bad guy he explodes" - that is a behaviour - but what it looks like when he explodes is not really. So I'd use BDD for pretty much everything, but add other forms of tests for some things. 2. The key is what I said: create specifications from the perspective of an external "user" of the system you are testing. If that "user" is some code, that is fine; it is the outside perspective that is the important point. BDD works fine for VERY technical things. There are only 2 problems: 1) BDD is not the same as the tools, so the ideas work everywhere, but the tools may not be the right fit - I probably wouldn't use Gherkin for testing embedded devices. 2) You have to be even more disciplined as a team when dealing with technical things. The language of the problem domain, the language you should express your specs in, is now a technical one, so you must be laser-focused on keeping "what" separate from "how". It is now much easier to slip into bad habits and start writing specs from the perspective of you, the producers, rather than from a consumer of your system. Always abstract the interactions in your specs. Never write a spec that describes anything about "how" your system works.
    2
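The "what, not how" spec style from comment 1508 can be illustrated with a toy example. All names are invented, and the fake game stands in for a real system under test: the spec itself never reveals whether the game is driven through a UI, a network protocol, or direct calls.

```python
class GameDsl:
    """Spec-level language: pure "what". Swapping the underlying driver
    (UI, API, direct calls) would not change a single spec."""

    def __init__(self, game):
        self.game = game

    def shoot(self, target):
        self.game.fire_at(target)

    def has_exploded(self, target):
        return self.game.is_destroyed(target)


class FakeGame:
    """Stand-in system under test, so the spec below actually runs."""

    def __init__(self):
        self.destroyed = set()

    def fire_at(self, target):
        self.destroyed.add(target)

    def is_destroyed(self, target):
        return target in self.destroyed


def test_shooting_the_bad_guy_destroys_him():
    # The behavioural spec: "If I shoot the bad guy he explodes".
    dsl = GameDsl(FakeGame())
    dsl.shoot("bad guy")
    assert dsl.has_exploded("bad guy")
```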
  1509. 2
  1510. 2
  1511. 2
  1512. 2
  1513. 2
  1514. 2
  1515. 2
  1516. 2
  1517. 2
  1518. 2
  1519. 2
  1520. 2
  1521. 2
  1522. 2
  1523. 2
  1524. 2
  1525. 2
  1526. 2
  1527. 2
  1528. 2
  1529. 2
  1530. 2
  1531. 2
  1532. 2
  1533. 2
  1534. 2
  1535. 2
  1536. 2
  1537. 2
  1538. 2
  1539. 2
  1540. 2
  1541. 2
  1542. 2
  1543. 2
  1544. 2
  1545. 2
  1546. 2
  1547. 2
  1548. 2
  1549. 2
  1550. 2
  1551. 2
  1552. 2
  1553. 2
  1554. 2
  1555. 2
  1556. 2
  1557. 2
  1558. 2
  1559. 2
  1560. 2
  1561. 2
  1562. 2
  1563. 2
  1564. 2
  1565. 2
  1566. 2
  1567. I think it is important to look for solutions as well as just point at problems, but some of your assertions are not really true. You describe one way of organising things at scale; this is not the only way, and it is not the best way. It is not how armies organise, for example - they stopped doing that at the start of the 19th century, because orgs that worked that way got beaten by orgs that didn't. The alternative is called "mission-based planning": you say "take that hill", not "turn left at the crossroads, walk for 3 miles, shoot the 3 people on the left" and so on. Sure, accountability matters, but it is not most effectively achieved through hierarchy and bureaucracy, at least not in disciplines that demand creative thinking. Small teams of goal-oriented people significantly outperform the alternatives. That is why nearly all big, successful SW companies are organised like this. You said "In large organizations, organizational abstraction is absolutely necessary as is specialisation of work in order to deal with the complexity" - this is only partially true. The consequences of organisational abstraction and specialisation are bureaucracy and coupling in the org. Orgs like this are inefficient and scale poorly; there's maths that demonstrates this, based on work on "non-linear dynamics" from the Santa Fe Institute. A classical, hierarchically organised firm only increases profitability by 86% when it doubles in size. That's a measure of the overheads of bureaucracy and coupling. A more distributed approach to organisation - many small, more independent teams (like Amazon, for example) - increases productivity (and profitability) by 115% when it doubles in size.
    2
  1568. 2
  1569. 2
  1570. 2
  1571. 2
  1572. 2
  1573. 2
  1574. 2
  1575. 2
  1576. 2
  1577. 2
  1578. 2
  1579. 2
  1580. 2
  1581. 2
  1582. 2
  1583. 2
  1584. 2
  1585. 2
  1586. 2
  1587. 2
  1588. 2
  1589. 2
  1590. 2
  1591. 2
  1592. 2
  1593. 2
  1594. 2
  1595. 2
  1596. 2
  1597. 2
  1598. 2
  1599. 2
  1600. 2
  1601. 2
  1602. 2
  1603. 2
  1604. 2
  1605. 2
  1606. 2
  1607. 2
  1608. 2
  1609. 2
  1610. 2
  1611. 2
  1612. 2
  1613. 2
  1614. 2
  1615. 2
  1616. 2
  1617. 2
  1618. 2
  1619. 2
  1620. 2
  1621. 2
  1622. 2
  1623. 2
  1624. 2
  1625. 2
  1626. 2
  1627. 2
  1628. 2
  1629. 2
  1630. 2
  1631. 2
  1632. 2
  1633.  @shelleyscloud3651  Sure, my point was that all of these things are irrelevant if we can't solve the first problem, but you are right, we need to be planning for the upside as well as the down, and no jobs is the upside!! 😳 These things may be irrelevant, but they do add to the complexity. Even so, if we have extinction on one side and "working to agree something with China" (or anyone else that is human) on the other, whatever the difficulties and problems with that, I think that the latter is the better choice, and what we should be working towards. The Chinese will be gone too, so it is in their interests as well!  Of course, if we can live with AI, we MUST sort out the immense economic impact. Forgive me, but I don't think that worrying about jobs is enough, though that is certainly a short-term concern of immense importance. I think you and I are agreeing, though; I just think that this is a MUCH bigger, more radical change than we are used to thinking about. Ultimately there will be no jobs, because the successful picture of AI is that the cost of production, the cost of intelligence, will fall to zero. So no jobs at all as we currently understand them. We will need to establish a different way of supporting people to live their lives; if this works it will presumably kill capitalism, and communism, and most other -isms too 😳 I do agree that this is a HUGE topic, but real AI raises lots of HUGE topics, which is why I started posting here in the first place. I like Alister and Rory's take on politics, but my impression is that informed, intelligent people like them, who aren't watching what is happening in AI closely, don't understand the magnitude of the challenge. Rory (sorry Rory) said "they will soon be able to write good essays"; they may soon be able to do anything that people do better than people - the former CEO of Google-X says within the next 2 years, and is advising people not to have children now.
What we have built is machines that can learn anything, and they can learn, annoyingly, orders of magnitude faster than us!
    2
  1634. 2
  1635. 2
  1636. 2
  1637. 2
  1638. 2
  1639. 2
  1640. 2
  1641. 2
  1642. 2
  1643. 2
  1644. 2
  1645. 2
  1646. 2
  1647. 2
  1648. 2
  1649. 2
  1650. 2
  1651. 2
  1652. 2
  1653. 2
  1654. 2
  1655. 2
  1656. 2
  1657. 2
  1658. 2
  1659. 2
  1660. 2
  1661. 2
  1662. 2
  1663. 2
  1664. 2
  1665. 2
  1666. 2
  1667. 2
  1668. 2
  1669. 2
  1670. 2
  1671. 2
  1672. 2
  1673. 2
  1674. 2
  1675. 2
  1676. 2
  1677. 2
  1678. 2
  1679. 2
  1680. 2
  1681. 2
  1682. 2
  1683. 2
  1684. 2
  1685. 2
  1686. 2
  1687. 2
  1688. 2
  1689. 2
  1690. 2
  1691. 2
  1692. 2
  1693. 2
  1694. 2
  1695. 2
  1696. 2
  1697. 2
  1698. 2
  1699. 2
  1700. 2
  1701. 2
  1702. 2
  1703. 2
  1704. 2
  1705. 2
  1706. 2
  1707. 2
  1708. 2
  1709. 2
  1710. 2
  1711. 2
  1712. 2
  1713. 2
  1714. 2
  1715. 2
  1716. 2
  1717. 2
  1718. 2
  1719. 2
  1720. 2
  1721. 2
  1722. 2
  1723. 2
  1724. 2
  1725. 2
  1726. 2
  1727. My point is that I don't think that they should be decoupled at all, but they are often treated as separate pieces of work. I think that we should always try to find the real user need behind any change and work to achieve that. This doesn't mean that the technicalities are unimportant; it means that they are more clearly important, even to non-technical people. We technologists are the experts in this part of the problem, so it is overly naive, though common, for dev teams to defer all planning priorities to non-technical people. It is important that the technical work is prioritised appropriately, and that takes collaboration and negotiation between people who represent different perspectives on the system. I think that this is best done by focusing on what matters to users. This is neither front end nor back end, user stories or technical features. All of these things matter to users. So we organise and plan our work to deliver what our users want, and we add things that our expertise tells us they want even if they don't ask for them directly - like security, resilience, maintainability and so on. All of these things are clearly, and importantly, in the users' interest, but they may not have thought about them in those terms. It is part of our job as technologists to advise them in ways that prevent them from making dumb, naive, overly simplistic prioritisations. It is my view that surfacing "technical stories", for example, doesn't help with this. A much better way is to always find, and express, the user value inherent in the technical things that we must do, or just take on the responsibility to do high-quality work (from a technical perspective) and not surface it or ask permission - technical improvements and enhancements are rolled into normal, everyday feature development - we don't ask for permission to do a good job!
    2
  1728. 2
  1729. 2
  1730. 2
  1731. 2
  1732. 2
  1733. 2
  1734. 2
  1735. 2
  1736. 2
  1737. 2
  1738. 2
  1739. 2
  1740. 2
  1741. 2
  1742. 2
  1743. 2
  1744. 2
  1745. 2
  1746. 2
  1747. 2
  1748. 2
  1749. 2
  1750. 2
  1751. 2
  1752. 2
  1753. 2
  1754. 2
  1755. 2
  1756. 1) My preference for code review is pair programming; it is a better review than a regular code review, and a lot more. 2) First, you need to know that it is broken, so good tests! Next, part of CI is to encourage small, frequent changes, so each change will be smaller and simpler than you are used to, and so easier to pull if it is a problem. CI works fast, so you will detect the problem more quickly than before; your small, bad change will be detected quickly, so there is less time for people to pile other changes on top of it. So yours may not always be the last change, but it won't usually be very far down the stack, and so is still easy to revert. In reality this is not usually a problem, for these reasons. 3) It doesn't "naturally call for a code freeze", but it does add a bit more complexity. I would automate the "certification" process. I worked on creating a financial exchange; we did what we called "Continuous Compliance" - our deployment pipeline automatically did everything that was needed to prepare for release, generated release notes, coordinated sign-offs, tracked the changes and so on. With a bit of ingenuity you can automate that stuff too. That may come later though. To start with, simply pick the newest release candidate that has passed all your tests when you are ready to release, then do the slow, manual approval/certification paperwork in parallel with new development. If by "certification" you mean some form of manual approval testing, then you do want to work to eliminate that; it is too slow, too low-quality and too expensive - watch some of my videos on BDD and acceptance testing for some ideas on how to do that.
  2142. Interesting, that seems to me like a statement of fact. For Approval tests we run the code, save the result that we got back from the code, and then in subsequent runs compare the results with the original result. That is, we refer to the original result, which was generated by the code itself. How is this NOT self-referential? Sure, you can mediate the self-reference by looking to see if it seems OK to you, but that doesn't stop it being self-referential, that just changes it to be self-referential-plus-sanity-check. This is not inherently bad, but it does place some limits on its value. The big problem with self-referential tests like these is that there is nothing, other than the sanity check, that says that the results that were generated make sense, or represent what you want, and certainly for many types of problem, the result that you get back from these bigger, chunkier bits of software is complex enough to make it VERY easy for the sanity check to be poor, cursory or to miss things. Humans are particularly bad at this kind of review. This is VERY different to creating a specification for what you want, before you create it, and then verifying that what you want is fulfilled by what you created. Particularly if, when you create the spec, you don't do it in a detailed "what is the precise output" form, which is what an Approval test validates. I am a big fan of Approval testing, but I don't think that it is a better replacement for BDD-style Acceptance Testing. Approval tests may be easier to write, but I still think that comes at the cost of being a weaker assertion, more coupled to the solution.
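The record-then-compare loop described above can be made concrete with a tiny sketch. This is a toy illustration of the mechanism, not any real approval-testing library's API; the `verify` helper and file naming are invented for the example:

```python
# Sketch of the approval-testing loop: the first run records the code's own
# output as the reference (the self-referential step, which a human must
# sanity-check); later runs compare against that recording.
import os
import tempfile

APPROVED_DIR = tempfile.mkdtemp()  # stand-in for a checked-in approvals folder

def verify(name, actual):
    path = os.path.join(APPROVED_DIR, f"{name}.approved.txt")
    if not os.path.exists(path):
        with open(path, "w") as f:   # first run: record the output as "truth"
            f.write(actual)
        return "recorded"
    with open(path) as f:
        approved = f.read()
    return "pass" if actual == approved else "fail"

print(verify("report", "total: 42"))  # first run: records the output
print(verify("report", "total: 42"))  # same output: pass
print(verify("report", "total: 43"))  # changed output: fail
```

Note that nothing here says "total: 42" was ever the *right* answer; the test only detects that the answer changed, which is exactly the limitation described above.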
  2179.  @TAiCkIne-TOrESIve  Well yes, it is. Let's say we have two processors, each working wholly independently of the other, concurrently. Now we share data between them. If we don't control access to this shared data, then the truth of the situation is that the data will change in uncontrolled, unpredictable ways. So we provide an illusion of synchronous access through some kind of mechanism like a mutex, semaphore, lock, or, most efficient of all, a compare-and-swap operation. All of these come at a big cost to performance to provide the illusion that these concurrent threads are working together. They aren't really, they are usually being sequenced in some way to preserve that illusion, but as I said the cost is enormous in terms of performance. By far the most efficient mechanism is compare-and-swap (or similar); if you benchmark this against single threads doing work alone, not concurrently, it is around 300 times slower than doing the same work on a single thread (or CPU). Locks and mutexes are MUCH worse than that. So it isn't even really synchronous, it is only sequential. The abstraction leaks heavily in terms of time, because for a large part of the time the whole system is stalled, doing nothing much beyond trying to synchronise the steps between the CPUs or threads. Sync has its uses, of course, but I do think that it is a leaky abstraction that happens to be sometimes useful. You can solve a lot of difficult problems, in other aspects of your system, by not over-using Sync as a model.
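A minimal sketch of the "illusion of synchronous access" described above: two threads share a counter, and a lock serialises every update. The result is correct, but only because the threads are being sequenced, not because they are truly co-operating; this is illustration, not a benchmark:

```python
# Sketch: a lock turns concurrent access into sequential access.
# Correctness is bought by sequencing the threads, at a performance cost.
import threading

counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:        # each increment is serialised here; the threads
            counter += 1  # queue up, they don't truly overlap on this data

threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # → 200000, correct only because the lock sequenced access
```

Remove the `with lock:` and the total becomes unpredictable, which is the "uncontrolled, unpredictable" change the comment describes; keep it, and you pay the stall-and-queue cost every iteration.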
  2211. Actually, unless there is an admin mistake, which happens sometimes, I always provide the references, just look at the description of the video. In this case the data is from the State of DevOps research, and is also reported in the books "DevOps Handbook" and "Accelerate". As I describe in this video, I think that there is a difference between being dogmatic and ruling out bad ideas. Do you disagree with my scores against the PRs? Can you show a similar argument that demonstrates how FBs are better and outperform CI? I don't think that this is about personal opinion and personal preference. I don't think that that is enough, this is about engineering and what works better. This doesn't mean that you can't build good SW with FBs & PRs, I have never said that anywhere. It means that it is harder, considerably harder, to build good SW that way, and that is for some very good reasons, not my opinions, but because FB development is based on a bigger bet, that things will be OK at the point of merge. CI is more pessimistic I guess, and so doesn't trust your guess that your FB will be fine and will integrate perfectly with everyone else's. So instead, we check that each small change integrates, and the data says that works better. The reason that I explained that I think that I am "not dogmatic" was not because it worries me that people may think of me that way, it was rhetorical, so that I could explain what is better than dogma. That isn't accepting all viewpoints as valid, you are allowed to have your own opinions of course, but I am also allowed to disagree with them, and you with mine. I think that the ways in which we choose to express our disagreement matter quite a lot. I don't call you or think of you as dumb because you disagree with me, but I do think that people are being dumb if they are being dogmatic. I have tried FBs, PRs, Waterfall dev, Pairing and not pairing and so on. 
So when I express my opinion it is based on personal experience and as I explain in this video, what I think of as a reasoned approach to understanding what I learn. If someone hasn't tried true CI and TBD, or pairing, and dismisses it, which of us is being dogmatic? Thank you for watching, sorry if you decide to go.
  2215. I think that is a team decision, what makes sense in your context. I think that there is a danger of relying too heavily on the documentation in tools like JIRA. It is certainly useful, but it is not really the "truth of the system". Stuff in JIRA could say one thing, and the code could do something different altogether. I think that is what you are describing, the potential for JIRA to be out of step with the system in production. If I am working in an environment where that matters, in a regulated environment for example, then I prefer to get a more accurate "documentation" of what is really in production than I can get from human notes in JIRA. I use an approach to testing where we create Acceptance Tests as "Executable Specifications" for the system. Every change is tested, every change has an Exec. Spec. which both tests and documents the behaviour of the system. This way it is not possible to release a change that mismatches the spec, because if it didn't meet the spec a test would fail and so reject the release. If you could do all of this quickly enough, say all your tests ran in under 1 hour, at that point I start to wonder about the real value of Feature-Flags rather than just changing the code, and documenting that change in the tests. But that is probably taking this idea beyond the scope of this video. If you would like to explore a bit more of what I am talking about with the Exec Specs, take a look at these videos: How to write Acceptance Tests: https://youtu.be/JDD5EEJgpHU Acceptance Testing with Executable Specifications: https://youtu.be/knB4jBafR_M
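The "Executable Specification" idea can be sketched very simply: the test is written as a statement of user-visible behaviour, so it documents the system and gates the release at the same time. The `place_order` function below is a hypothetical stand-in for whatever the real system-under-test exposes, not any particular API:

```python
# Sketch: an acceptance test as an executable specification. The test name
# and docstring read as documentation; the assertion makes it enforceable.

def place_order(account, item, qty):
    # Illustrative stand-in for the real system's public interface.
    if qty <= 0:
        raise ValueError("quantity must be positive")
    return {"account": account, "item": item, "qty": qty, "status": "ACCEPTED"}

def test_a_customer_can_place_an_order():
    """Spec: when a customer orders a positive quantity, the order is accepted."""
    order = place_order("alice", "book", 2)
    assert order["status"] == "ACCEPTED"

test_a_customer_can_place_an_order()
print("spec passed")
```

Because every change must keep specs like this green, the "documentation" cannot drift from what is actually in production, which is the advantage over notes in JIRA.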
  2309. Not specifically, no. It is generically about "Software development" and so covers game development in the same way that it covers any other kind of dev. It is a book about what I believe is an approach that improves your chances of doing a better job whatever the nature of your product. My experience has been that everyone thinks that their form of dev is a special case. Game dev has some specific challenges, but they are different in scale, not in kind or in principle. One of the more difficult parts of the approach that I describe is how to test the code that you create at the point where it touches the real world, through a UI for example. This is not really any different for a game than for anything else, except that the UI is so rich in a game. But the behaviour underneath the pixels is all completely testable, and then you can test the rendering of that model into pixels. This is just the same problem as testing a UI anywhere else. I have written games this way, though not any commercial games for a very long time. So it is certainly possible, but it will be hard to do well for some things. IMO this is mostly about how much value you think that there is in this approach. I think that it is so important, so valuable, as an approach that I will work hard to make my system, whatever it is, testable. Even if it has a rich, complex UI as a part of it. The last commercial game I wrote was a financial trading game; some people may not think of it as a game, but it really was, it had rich real-ish-time graphs that you interacted with to predict where prices in markets would go. We did that as a full CD development, testing every aspect. As part of that we architected our SW so that we could test nearly all of it in isolation from the pixel-painting of the graphics, but then did some generic testing of the pixel painting. This was certainly NOT a AAA game, but there was nothing in principle that was different. 
This is what I would do if I was building a AAA game!
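The separation described above, all behaviour in a plain testable model, rendering as a thin layer over it, can be sketched in a few lines. The class and function names here are illustrative only:

```python
# Sketch: game behaviour lives "underneath the pixels" in a plain model that
# is fully unit-testable; rendering is a thin edge tested separately.

class PlayerModel:
    def __init__(self, x=0, y=0):
        self.x, self.y = x, y

    def move(self, dx, dy):
        self.x += dx
        self.y += dy

def render(player):
    # The thin "pixel painting" edge; in a real game this draws sprites,
    # here it just reports what would be drawn.
    return f"sprite@({player.x},{player.y})"

p = PlayerModel()
p.move(3, 4)                    # the behaviour is testable without any pixels
assert (p.x, p.y) == (3, 4)
print(render(p))                # → sprite@(3,4)
```

Everything interesting about gameplay can be asserted against the model; only the generic model-to-pixels translation needs UI-style testing.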
  2312. I think that we are 90+% aligned here. A few thoughts... If your typical cycle between branching and merging is less than half a day, then you are doing CI, so I have no argument with that at all. I think that you are doing more typing than me; I work on master locally and then merge all my local commits immediately that I have made them to origin/master. So I don't have to create any branches or merge to them (less typing for the same effect that you describe). To be honest, this doesn't matter at all, if you want to type a bit more it doesn't matter. "The higher the threshold of the quality gate, the more frequent the checks are going to be red" Not so! The slower the feedback loop, the more often the checks will be red! Adding branches slows the feedback loop. The problem, as you correctly point out, is the efficiency of the feedback loop. One of the more subtle effects of feature-branching, in my experience, is that it gives teams more room to slip into bad habits. Inefficient tests are one of the most important ones. If everyone is working on Trunk, then slow tests are a pain, so the team keeps them fast. I think that most of the rest of what you are describing is about feedback efficiency. Let's try a thought-experiment... If you could get the answer (is it releasable) in 1 minute, would you bother branching? If not, then what we are talking about is where the threshold lies at which the efficiency gains of hiding change (branching) outweigh the efficiency gains of exposing change (CI). I think that CI wins hands-down as long as you can get a definitive answer within a working day. Practically, the shorter the feedback cycle the better, but my experience is that under 1 hour is the sweet-spot. That gives you lots of chances to correct any mistake during the same working day. 
I built one of the world's highest performance financial exchanges and the Point of Sale System for one of the UK's biggest retailers, and we could evaluate the whole system in under 1 hour. So my preference is to spend time on optimising builds and tests, rather than branching.
  2402.  @brownhorsesoftware3605  Here's a quote from the OO page on Wikipedia: "Terminology invoking "objects" and "oriented" in the modern sense of object-oriented programming made its first appearance at MIT in the late 1950s and early 1960s. In the environment of the artificial intelligence group, as early as 1960, "object" could refer to identified items (LISP atoms) with properties (attributes);[3][4] Alan Kay later cited a detailed understanding of LISP internals as a strong influence on his thinking in 1966.[5] "I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning – it took a while to see how to do messaging in a programming language efficiently enough to be useful)." — Alan Kay [5] Another early MIT example was Sketchpad created by Ivan Sutherland in 1960–1961; in the glossary of the 1963 technical report based on his dissertation about Sketchpad, Sutherland defined notions of "object" and "instance" (with the class concept covered by "master" or "definition"), albeit specialized to graphical interaction.[6] Also, an MIT ALGOL version, AED-0, established a direct link between data structures ("plexes", in that dialect) and procedures, prefiguring what were later termed "messages", "methods", and "member functions".[7][8] Simula introduced important concepts that are today an essential part of object-oriented programming, such as class and object, inheritance, and dynamic binding.[9] The object-oriented Simula programming language was used mainly by researchers involved with physical modelling, such as models to study and improve the movement of ships and their content through cargo ports.[9]" There were lots of steps, and it was Alan Kay that really pulled the threads together, but he was building on lots of prior work that got parts of the picture. 
As I understand it SIMULA was a kind of DSL for encoding simulation systems, it wasn't really a general purpose language, at least not really used as one.
  2530. Thanks for the feedback. I am aware that there is a danger of me lumping a whole category of systems together here. I am pleased to hear that your product does better than most. That is kind of my point though: the danger with low-code systems, and particularly their marketing, is that they are sold as some magic bullet that speeds the development process, when really they MAY only speed the coding. My experience of building SW systems is that if the coding is the hard bit you are doing it wrong, but that is how these things are sold. If low-code systems can genuinely raise the level of abstraction, so that they help us to think of the problems in better, more efficient ways (like a spreadsheet or SQL does), then fine, and certainly sometimes they do, but that blurry line between "simple enough" and "arrrgggh there's an iceberg ahead" is very difficult to spot. In regular SW, at least when it is done well (and there are lots of assumptions in that statement), we approach even things that we think may be simple more defensively. If your low-code system allows me to incrementally discover the problem and "grow" my solution to it, then great. If I can make a mistake and spot it in minutes (so you probably need unit testing, not just testing), well before I get anywhere near production, then even better. If you can develop it and deploy from a deployment pipeline, which I'd consider table stakes, but highly unusual in low-code environments, then fantastic - I have no argument, and if I have a problem that fits in your niche, I'd sign up.
  2624. The problem is, how do you measure success? If you have not seen the alternative, then what I describe may sound "idealistic" and not "available to normal teams". This isn't true. This is practiced around the world in VERY successful teams of all sizes. I didn't say "CI is out the window if you can't do it in 15 minutes", but read the definition of CI. Not my words... Wikipedia: "In software engineering, continuous integration (CI) is the practice of merging all developers' working copies to a shared mainline several times a day." Martin Fowler: "Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily". From the C2 Wiki which defined it: "What if engineers didn't hold on to modules for more than a moment? What if they made their (correct) change, and presto! everyone's computer instantly had that version of the module". ALL CI experts describe CI as a process where we evaluate changes "AT LEAST once per day". Once per day is not really good enough, but will do. I use 15 minutes as a realistic example, that is how I work and have done for 20-ish years. So not a strawman, a real, working approach. The data (read the Accelerate book) says that anything that compromises the "at least once per day" produces software of lower quality more slowly. If you have never seen what good SW dev looks like, you may not recognise this in your own work, but that is what the data says. You are free to disagree, but I am afraid that you are the one with the strawman in this case.
  2639. Well, as I believe Wittgenstein's contributions described, philosophy isn't science. Science is science, and science is based, in part, on being skeptical about everything - questioning things - and that is what I was referring to. "Science is the belief in the ignorance of experts" - Richard Feynman. I am neither a philosopher nor a scientist, though I know a lot more about science than I do about philosophy. I aspire to be an engineer, and apply scientific (not philosophic) style reasoning to solving practical problems in software. I can only approach the world from my own perspective; I am rarely quoting other people on this channel, and when I am I try to make sure that I say so. These are my ideas and describe my approach. I think of it as applying the skeptical mind to ideas, questioning everything. I don't mean this in the, to me, rather dry terms of philosophy, I mean it in the more practical terms of science. I try hard to find the weakness in ideas, including my own, as a way of improving my understanding of things - so this is what I mean by "I question pretty much everything". I am a software developer, and I try my best to understand problems and how to solve them. One of the most common failings, not just in software, is the temptation to fall back on dogma and received wisdom. So I think I do question everything, in that sense. Thank you for sharing my videos with your students, and I hope that they will question ideas in the same way that I assume we would both recommend.
  2687. I argue that modern engineering is really the practical arm of science. We apply scientific style reasoning to solving practical problems. It has been a fundamental of modern science, since Karl Popper in the 1930s, that science does not proceed by "proving correctness", it proceeds by "falsifying mistaken explanations". That is what I mean by "start by assuming we are wrong". Some assumptions are built on a stronger footing than others, but NONE are certain. If asked to write a C++ program, that could be a stupid choice, and you may only find that out later. At which point you would need to change. So good SW dev minimises the impact of the bigger risks. For example, I don't start new projects assuming that my compiler is broken; I have in the past found bugs in compilers, but I don't start the project assuming that there will be any. If there is a bug in my system, I don't assume that it is because of bugs in the compiler, I start by assuming that the bugs are mine, because that is more likely, but after I have "falsified" that theory, I move on to other possible explanations, and eventually, after I have ruled out lots of other things, the next theory to try out is that my compiler is wrong. So starting off assuming that our assumptions are wrong is really saying that the safest course, as an engineer, is to work on your best theory ("my compiler works") until you have something that makes you think it doesn't. But don't start by assuming that it can never be wrong. A scientist assumes quantum theory, or general relativity, is probably correct, but the key word here is "probably", because they are certain that neither of them is in fact correct, even though each has passed every experimental test ever. I will work on the assumption that my compiler is correct, but in fact I am certain that it isn't, and that most of the time the bugs that are in it are so esoteric that they won't affect me.
  2688.  @kennethgee2004  I guess that we are going to disagree on the philosophy of science stuff then. 😉 I think that your view, and Edison's, is not well aligned with modern scientific thinking. My preferred way of thinking is probably captured in David Deutsch's "The Beginning of Infinity", where he describes science as striving for "good explanations", and defines fairly precisely what makes an explanation "good". This - "As a scientist, one should not assume that quantum theory is true, as there has been no evidence for it." - is simply factually incorrect. There is lots of evidence for it; in fact, as I said, it has stood up to EVERY experimental test that has been applied to it so far. Without quantum theory, electronics doesn't work. You can carry out a quantum experiment with a few dollars of hardware:  https://youtu.be/kKdaRJ3vAmA?si=xi0ZiKQk_B4eb0ef  https://spookyactionbook.com/category/diyquantum/ The point of science is NOT to assume that you are right, but, based on your best theory of the reality of the situation, to create an explanation, and then show where it is wrong. The assumptions that you describe are wrong; the SOLID principles are a useful, kind of folk description that can help, but they aren't very rigorous, and are open to criticism: https://youtu.be/tMW08JkFrBA?si=VdgFv7JOZU_flqFI This doesn't mean that they are useless, and neither does saying "start off assuming you are wrong" mean that you reject ideas without evidence. My point is that you look for the evidence. Believing things by rote is not engineering, and is not science based. As Richard Feynman said, "Science is a satisfactory philosophy of doubt". I recommend doubt, not automatic rejection.
  2694.  @ITConsultancyUK  Well, lots of companies like yours would disagree. SpaceX (not defence, but similar) and the USAF are currently using continuous delivery and high levels of automated testing for fighter jets, Tesla for cars. The difference is that you have to build the testing into the development process. Sure, people may be cheaper if you do it after the fact, but this isn't how it works for the examples I have given. In these cases you design the system to be testable from day one. I was involved in building one of the world's highest performance financial exchanges; we ran roughly 100,000 test cases every 30-40 minutes. No army of people can match that coverage. Google run 104,000 test cases per minute. I have helped manufacturers implement these techniques for medical devices, scientific instruments, computer systems, chip manufacturers, cars, the list goes on. So we aren't talking about "toy websites" here, these are complex, regulated, safety-critical systems. What I am trying to describe here is a genuine engineering approach for SW dev in these contexts. Sure, you can never test 100%, whatever your strategy, but automated testing is always going to cover orders of magnitude more cases than manual testing, unless you do it really poorly. Tesla recently released a significant design change to the charging of their Model 3 car. It was a software change, test-driven, using ONLY automated tests to validate the change. The change went live in under 3 hours, and after that the (software driven) Tesla production line was producing cars with a re-designed charging mechanism that changed the max charge-rate from 200kW to 250kW. That would be simply impossible if it relied on manual testing. I think that humans have no place in regression testing, so I am afraid that we will have to disagree on this.
  2835. My preference is to maximise the amount of code that I can test easily with unit testing, so that interaction with external inputs and outputs is isolated, abstracted and minimised in terms of code and complexity. Then you deal with these "edges" differently. I prefer to reinforce my unit testing with what I call automated "Acceptance Testing"; this evaluates high-level scenarios, through the UI where applicable, but doesn't aim to exhaustively test the UI. The logic behind it, though, is fully tested with unit testing. Depending on the nature of the system, this approach, combined with good exploratory (manual) testing, is enough. I have some friends who have done some other stuff around automating the validation of specific UI states, effectively taking a snapshot, under automated test control, and verifying the result in the next test run - approval testing for UIs. The clever bit is in automating the handling of failures. My friend Gojko Adzic has written some (open source) tools that will show you a "before and after" comparison of the snapshot when a test fails; you click on the one that is correct and the test remembers it for future runs. So if the change made the test fail, and you agree the test should fail, it stops the release; if you think the difference is acceptable, the test tools "remember" the new picture and use that in future. In general I am suspicious of trying to be too precise in testing UIs, because they change all the time. Gojko's testing is probably as good as you can get, but it still needs human support to check releasability. For most systems, I don't think that you need that much precision, so a more behavioural approach to testing works fine, backed up by manual exploratory testing to just verify that stuff still "looks ok".
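The approval-testing loop described above can be sketched in a few lines. This is a toy illustration of the idea only, not Gojko Adzic's actual tools; all file names and helpers here are made up for the example:

```python
import tempfile
from pathlib import Path

def approve_snapshot(name, rendered, approved_dir):
    """Compare a freshly captured UI snapshot against the last approved one.

    Returns True (pass) if they match. Otherwise the new snapshot is saved
    as a 'candidate' for a human to review, and the test fails - stopping
    the release until someone decides the change is acceptable.
    """
    approved = approved_dir / f"{name}.approved.txt"
    candidate = approved_dir / f"{name}.candidate.txt"
    if approved.exists() and approved.read_text() == rendered:
        return True
    candidate.write_text(rendered)
    return False

def human_approves(name, approved_dir):
    """The 'click on the correct one' step: promote candidate to approved."""
    candidate = approved_dir / f"{name}.candidate.txt"
    candidate.replace(approved_dir / f"{name}.approved.txt")

snapshots = Path(tempfile.mkdtemp())
first = approve_snapshot("login", "<button>OK</button>", snapshots)        # fails: nothing approved yet
human_approves("login", snapshots)                                         # reviewer accepts the snapshot
second = approve_snapshot("login", "<button>OK</button>", snapshots)       # passes: matches approved
changed = approve_snapshot("login", "<button>Cancel</button>", snapshots)  # fails: the UI changed
```

The key design point is the same as in the comment: failures are cheap to handle, because "approving" a deliberate UI change is one human action rather than rewriting test code.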
  2873. I am afraid that it is more complex than that. It isn't "the job of PMs and analysts to produce requirements". Professional SW dev is nearly always a team exercise, and while it may be true that most requirements come from PMs and analysts in your org, if you treat those as some kind of "perfect truth" your SW won't be very good. The problem is that SW is complex, and as we start working on it we learn new things all the time. Small things like "it would look nicer if the button was blue", and big things like "this completely invalidates our assumptions for how this stuff works". This is inevitable and constant. If your team doesn't allow for that kind of constant, incremental learning, then you can't be doing a great job. No human being, PM or not, has perfect foresight, and their guesses about requirements will always be wrong at some level, just as yours or mine would be. Great teams recognise this, and organise to allow for learning to happen all the time, and allow themselves the room to profit from it when it happens. If you wait for your PM to give you permission to refactor your code into a better shape, when you learn something new that tells you what that better shape should be, then you are doing yourself, and them, a disservice. That is how you maintain your code as a good place to work. If the devs on your team don't see mistakes or omissions in the "requirements" from the PM or analysts, then they don't understand the problem well enough. New requirements can, and should, come from devs, QAs, ops people, anyone! Team work is more than people working in different boxes next to each other. It is the goal-keeper's job to stop the ball going into the goal, but if the striker is on the line when the ball comes, he doesn't say "not my job", he kicks the ball clear.
  2898. I think that whether or not "we practice CI" is true depends. Do your feature branches for bugs or features last for less than a day? If not, then I don't think that you are practicing CI - sorry! ...and that is really one of the points of this video. Until EVERYONE's changes are merged together in a version of the code that you expect to be deployed into production, they aren't integrated. That's the only point at which we can definitively answer the question "is my code ready to release?". That is what CI gives us. So if CI matters, and the data says it does, then we need to do whatever it takes to achieve it, including challenging things that make us hide any changes, anywhere, for longer than a day. Your point on the scalability of pair programming is different. I think that the way that you build trust in your colleagues is to help your colleagues to grow. That is not only about seniors teaching juniors; juniors learn from each other too. My preferred approach is to do pair programming, but to regularly rotate the pairs so that everyone on the team gets to pair with everyone else on the team regularly. I usually prefer to rotate pairs every day. This spreads learning of all kinds, and is one of the best strategies that I know of for improving developers of all skill levels. Let juniors pair, but have someone just sanity-check their work - not every change, but just enough to spot big mistakes or misunderstandings. I know that this sounds like a PR or a code review, but it is not really a gate, as such, merely a check-in on the progress of juniors. We used to not let people completely new to the codebase loose with other newbies; only after they had paired with more experienced colleagues for a while would we let juniors pair with juniors.
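The "branches last less than a day" rule above is easy to turn into an automated check. A minimal sketch, using entirely made-up branch names and timestamps (in practice you would pull these from your version-control history):

```python
from datetime import datetime, timedelta

# The CI rule discussed above: work hidden on a branch for more
# than a day means we have stopped continuously integrating.
MAX_BRANCH_AGE = timedelta(days=1)

def stale_branches(branches, now):
    """Given {branch_name: last_merge_to_mainline_time}, return the
    branches whose changes have been hidden for longer than a day."""
    return sorted(name for name, last_merged in branches.items()
                  if now - last_merged > MAX_BRANCH_AGE)

# Hypothetical example data
now = datetime(2024, 1, 10, 12, 0)
branches = {
    "fix/typo": datetime(2024, 1, 10, 9, 0),      # 3 hours old: fine
    "feature/big-rewrite": datetime(2024, 1, 5),  # 5 days old: not CI
}
print(stale_branches(branches, now))  # ['feature/big-rewrite']
```

A check like this can run in the pipeline and nag the team, which keeps the "integrate at least daily" discipline visible rather than relying on memory.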
  3064. I am certainly biased; Continuous Delivery is more than what most people think of when they think of agile development. I care less about the words used, though, and more about what works. CD is how Amazon, Google, Netflix, Facebook and many, many more of the big web-shops work. It is how Tesla build the software for their cars and how Volvo make their trucks. It is how Siemens build software that runs in their machines in hospitals, and it is the approach behind how some of the highest-performance systems on the planet were built, as well as the biggest and most scalable systems on the planet. Ericsson, one of the leading suppliers of 5G infrastructure, uses this approach to roll out 5G across the planet. So saying "Sorry, saying simple agile processes work in these larger complex systems is not always correct" is not correct! This is your best chance of success. There is no guarantee in anything; people doing dumb things, or working on bad ideas, can always fail, so I can agree with "not always correct", but only in the sense that you have a dramatically better chance of success with the approach that I describe than with any other that we know about so far. The evidence is there: this works, it works at immense scale, and it works better than any other approach that we know of. This is the kind of impact that I would expect if we were to achieve a genuine "engineering approach to software". I think that we have found that, and while agile was a good start, and Scrum is a bit of a diversion, it needs more than stand-up meetings and people called "Scrum-master" to count as "Agile", or, even more importantly to my mind, "Engineering".
  3418.  @cronnosli  CD is about "working so that our software is always in a releasable state"; that doesn't necessarily mean pushing to production every few seconds. My team built one of the world's highest-performance financial exchanges, and the authorities rather frown on "half an exchange" with other people's money in it. It was 6 months before we released anything, but we practiced CD from day one. At every stage, our software was of a quality, and tested enough, and audited enough, and deployable enough, to be released from day one. So you do that. How do you measure it? Use the DORA metrics. You will score poorly on "Deployment frequency", but as long as you "could" deploy, I would count that for now, and that will get you on the right road. The "intrinsic value" is in multiple dimensions, but when you are in the mode of not being able to release frequently, for whatever reason, then working so that you are capable of releasing your software after every small change is a higher-quality way of working, and stores up less trouble for when you do want to release. It is measurably (see DORA again) more efficient and more effective. As you develop the ability to do this, HOWEVER COMPLEX YOUR SYSTEM, you get better at it, until ultimately you can release more often, if you choose to, even if that isn't your primary goal. This means that, whatever the change, you can get to the point of release optimally, because your SW is always in a releasable state, and the only way that you can achieve that is to MAKE IT EASY TO MAKE RELEASABLE SW! On your last example, no, it doesn't mean "this 1.5 months of work has no value because it wasn't fast enough", but if you had been practicing CD, it would probably have been easier for you to find the problem, because your code would have been better tested and so easier to automate when you were bug hunting. This isn't a guarantee, but more a statistical probability.
Let's be absolutely clear about this: CD works perfectly well, actually better than that, CD works BEST, even for VERY complex software and hardware systems. SpaceX haven't flown their Starship recently, but the software is still developed with CD.
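For anyone wanting to start tracking the DORA measures mentioned above, the arithmetic is simple once you log your deployments. A sketch with invented numbers (the log format here is hypothetical; DORA's fourth metric, time to restore service, needs incident data and is omitted):

```python
from datetime import datetime, timedelta

def dora_metrics(deployments, window_days):
    """Compute three of the four DORA measures from a deployment log.
    Each record needs: commit_at, deployed_at, failed (bool).
    Deployment frequency and lead time cover 'Throughput';
    change failure rate is one half of 'Stability'."""
    lead_times = [d["deployed_at"] - d["commit_at"] for d in deployments]
    return {
        "deploys_per_day": len(deployments) / window_days,
        "mean_lead_time": sum(lead_times, timedelta()) / len(lead_times),
        "change_failure_rate": sum(d["failed"] for d in deployments) / len(deployments),
    }

# Made-up log: 4 deployments over a 2-day window, one of which failed
log = [
    {"commit_at": datetime(2024, 1, 1, 9),  "deployed_at": datetime(2024, 1, 1, 10), "failed": False},
    {"commit_at": datetime(2024, 1, 1, 11), "deployed_at": datetime(2024, 1, 1, 13), "failed": True},
    {"commit_at": datetime(2024, 1, 2, 9),  "deployed_at": datetime(2024, 1, 2, 10), "failed": False},
    {"commit_at": datetime(2024, 1, 2, 14), "deployed_at": datetime(2024, 1, 2, 15), "failed": False},
]
m = dora_metrics(log, window_days=2)
print(m["deploys_per_day"], m["change_failure_rate"])  # 2.0 0.25
```

In the "we could release but don't" mode described above, the deployment-frequency number will look bad, which is fine; the point is to watch lead time and failure rate trend in the right direction.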
  3453. I am not really in a position to gauge the current job market for web developers, and certainly not in New York. There are certainly different skills that help with web development, but they are the aesthetic skills of what is nice to use, rather than a big difference in technical skills. In general I think it is good for any programmer to have experience of several different languages and approaches; it broadens your perspective and you get to see more clearly what you like and dislike in each. Javascript and C# are closely related, which may be why you like C#; there is enough new stuff to learn there, though, so nothing wrong with learning C#. Python is very popular and a nice language to program in, and then there are functional languages, where there is more new stuff to learn. In general, I don't think that programming as a job is going out of fashion. It is as secure as any other job, and a lot more secure than most. Programming is creative enough that it will be a long time before machines can replace people, and when they are good enough to do most of the programming, nearly every job will be at risk. Whether web programming is a better, or worse, bet than back-end programming, I think that there is demand for good programmers in both. The best advice I can offer is to work to be a good programmer; work on the important programming skills that will be transferable, whatever you work on - modularity, cohesion, good separation of concerns, loose-coupling, abstraction. Develop skills in design and TDD; I think they will help you stand out from people who only know language syntax - which is not enough, alone, to do a good job.
  3747. Sure, and retrofitting this stuff is more difficult than starting from scratch, but while I agree with the general aim of your comments, I disagree with some of your conclusions. Sure, retro-fitting TDD to an existing codebase is more difficult, so organise things so that new work is possible with TDD, but don't retro-fit to code that you aren't changing. "For CI you need to have reliable unit tests" - well yes, but if you have 1 reliable test that stops you making a common mistake, that's better than none. Pair programming is a choice; it is not difficult to adopt if people want to do it, so start discussing the reasons why the team might like to try it. I agree that some of this stuff is difficult to change, but it is not impossible; in fact I make a living helping companies and teams do that, and we almost never get to start from a blank sheet. Step 1 in solving any problem is identifying that there is a problem, step 2 is coming up with something that may address the problem, and step 3 is trying it out to see if you can make it work. Here I certainly try to help people with steps 1 & 2; the trouble with step 3 is that it is a bit more contextual, but there are plenty of videos here that try to tackle step 3. Check out my stuff on refactoring, or acceptance testing. The only bit that I disagree with is the assumption behind "250% more effort is hard to sell" - so don't! Don't structure this as "250% more effort"; find small changes that you can make that don't really add more effort, they just change where you apply the effort. Make the code that you are working on now, today, a little better. Write new code with tests, and do the work to isolate that new work from the big-balls-of-mud elsewhere so that you can. I think that you get to, what I concede can look like, some fantasy Nirvana by many small, practical steps, not by huge stop-the-world efforts.
  3801.  @damianmortimer2082  Software is strange stuff, and things that seem obvious in physical engineering aren't always so obvious in software. Plans matter to the degree that you need a sense of direction. Who is doing what on what day matters less. I agree with you that re-planning is vital as soon as the circumstances change. So you need to optimise so that you can see the circumstances change as quickly as you can, and then react and change your plan. That is how people work when situations are fluid. Software development is always an exercise in learning, so as we learn new things we need to change our plan. If we were constructing a building, what would you think if I created the foundations for it, but decided that I would leave it till later to decide whether my foundations could support the whole building? That would be irresponsible. The trouble is that software is so flexible that it is easy to make this kind of mistake. It is also so variable that there is no strong agreement on what "able to support the building" means in any given context. A building is unlikely to start out being planned as a 5-storey structure and then unexpectedly, based only on its popularity, end up needing 5,000 floors. This happens in software! It is difficult to predict how the plan will fail, but it will always fail. So we need to be smarter and find ways to protect our assumptions. We build something that will work, given our assumptions, and limit it to that. Perhaps we build a game that works great on a PS4, and ensure that it works great on a PS4 at every step in its development. The other big difference in SW is that it is malleable; we can change it at any time, and if we adopt some engineering discipline, that means that we can grow it over time. So start with something that works well, and then enhance it.
  3997.  @ajibolaoki5064  I can't speak for other orgs, but I have never set any age limits. I recognise that it may not look like it from your position, but in general there is a skill shortage in software development. So it should be a seller's market. The trouble is that the people doing the hiring aren't always great at it, and so to make their lives easier they often tend to "go by the numbers", looking at experience and counting skills on your resume. I, and many others, think that this is dumb, but it is how it often works. So you either have to find a way to improve your skills - and I really don't recommend telling lies! - or you seek out orgs that think a little differently. Orgs that are looking for the right person rather than the right list of skills on a resume. Increase your skills by writing more code! Find an open-source project and contribute, work on something that interests you, build your own stuff - play with code and do silly things. All of these will make you stand out a bit more from others. In looking for orgs that are a bit more people-focussed, you can often make an initial guess based on job adverts; it doesn't always work, but it may be a good starting point. If the job is mainly just a list of skills and experience, that is not a great sign. If the job talks more about the problem that they are trying to solve and/or the type of people that they want or the type of team that they have - a better sign. In general, smaller teams are, IMO, a better starting point than big orgs. I hope that some of this is helpful, and I wish you luck.
  4144.  @brandonpearman9218  It's a problem. If I entered the tennis championship at Wimbledon without being good at tennis, and someone said "He's not skilled enough", no one would conclude that that meant the tennis commentator was saying that there was no way to criticise tennis. Of course there are ways to critique TDD: it is more complex at the edges of a system, and it is bad as a tool for retrofitting to pre-existing code, without a lot of hard work and considerable skill, but I think that yours is a straw-man argument. TDD does take some skill; people at the start, however good they are at SW dev, take a little time to learn it. It is MUCH easier to learn for people with very good design skills, because the reason that TDD is difficult is that it exposes you more quickly, and more clearly, to the consequences of your design choices than any other approach. I did recognise where the quote of "TDD induced design damage" came from. I followed the discussion between Martin Fowler, Kent Beck and DHH pretty closely when it was released some years ago. At the time, I told my friends that I thought that Martin and Kent were being too kind, too equivocal, in the discussion. They didn't challenge some of DHH's ideas that I thought specious. I don't agree with DHH. I have seen people doing a bad job of TDD, and that resulted in poor code, but that has been unusual in the teams that I have worked with, where the reverse is much more commonly the case. I have been involved in teams that built, literally, award-winning software, some of which is VERY widely used around the world as part of the infrastructure of several common frameworks and tools, and is widely regarded as an example of VERY GOOD DESIGN. Part of the problem, as I see it, is that TDD is a different way to design, so if you are experienced, and maybe good, at SW design it is a big deal to change your working habits, but even then, if you do, it works better in my opinion and experience.
    1
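A minimal sketch of the test-first idea discussed above, as a hypothetical example (the function, its signature and the numbers are all invented for illustration): the test is written before the implementation, and the ease or pain of writing it is early feedback on the design.

```python
# Hypothetical red-green example: the tests below were (conceptually)
# written first, and they fixed the shape of total_price's interface.

def total_price(quantity, unit_price, discount=0.0):
    """Return the total price, applying a fractional discount."""
    if not 0.0 <= discount < 1.0:
        raise ValueError("discount must be in [0.0, 1.0)")
    return quantity * unit_price * (1.0 - discount)

def test_applies_discount():
    # Writing this first forced a decision: discount is a fraction,
    # passed as a keyword, not a percentage or a separate object.
    assert abs(total_price(2, 10.0, discount=0.1) - 18.0) < 1e-9

def test_rejects_invalid_discount():
    try:
        total_price(1, 10.0, discount=1.5)
        assert False, "expected a ValueError"
    except ValueError:
        pass

test_applies_discount()
test_rejects_invalid_discount()
```

The point is not the arithmetic; it is that every design choice (parameter order, units, error handling) was confronted in the test before any implementation existed.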
  4145. 1
  4146. 1
  4147. 1
  4148. 1
  4149. 1
  4150. 1
  4151. 1
  4152. 1
  4153. 1
  4154. 1
  4155. 1
  4156. 1
  4157. 1
  4158. 1
  4159. 1
  4160. 1
  4161. 1
  4162. 1
  4163. 1
  4164. 1
  4165. Allan, my first thought is that this is a nice problem to have! What you are saying is that your throughput is so good that it is essentially background noise, and that your stability is so good that you have few enough defects that they are individual events rather than useful statistical signals. Congratulations! This is a great example of what is possible when you take this approach seriously!

I have worked in teams that were similarly close to optimal for those teams. These days I spend most of my time working with teams that are on the journey to this kind of destination, rather than having arrived. The difficulty I have in offering any suggestions is that I think it is VERY contextual from now on; you are already beyond the generic stuff. So please treat these suggestions as just thoughts, I may miss the mark!

One of the BIG variables here is the complexity, or otherwise, of your pipeline. If 'releasability' for you involves various multi-stage evaluations (Commit, Acceptance, Performance, Security, Data Migration, etc.), then you could think of using 'Throughput' & 'Stability' measures as technical measures between stages: "How often are bugs found in 'Commit', and how long to recover?", "How long in Acceptance?". That gives you more fine-grained data and can be useful in optimising the pipeline, and individual stages. Where do you think the areas for improvement are? Throughput & Stability are each made up of two measures; bug rates may be low, but MTTR may still be useful. How about stretching the measure of Throughput from "Lead Time" to "Cycle Time"? It is harder to measure, but includes a bit more of the human/cultural aspects.

Finally, in this state you are probably so close to optimal for your team that it doesn't matter so much; you can play with ideas, and continue to track T & S only to make sure that you don't make them way worse. If the team likes a change, or if some more business-focused metric can be used ("gained more customers", "increased market share"), maybe those metrics are where your focus should be now? Again, congratulations to you and your team, this is a nice story to hear. I hope that some of these ideas offer food for thought. Dave
    1
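The stage-level Throughput & Stability idea above can be sketched in code. This is a hypothetical illustration: the record shapes (commit/deploy timestamps for changes, stage/failure/recovery timestamps for incidents) and all the data are invented.

```python
from datetime import datetime, timedelta

def lead_time(changes):
    """Average commit-to-deploy time (a Throughput measure).
    Each change is a (committed_at, deployed_at) pair."""
    deltas = [deployed - committed for committed, deployed in changes]
    return sum(deltas, timedelta()) / len(deltas)

def mttr(incidents, stage):
    """Mean time to recover for one pipeline stage (a Stability measure).
    Each incident is a (stage, failed_at, recovered_at) triple."""
    deltas = [rec - failed for s, failed, rec in incidents if s == stage]
    if not deltas:
        return timedelta()
    return sum(deltas, timedelta()) / len(deltas)

# Invented example data
changes = [
    (datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 10, 0)),
    (datetime(2024, 1, 2, 9, 0), datetime(2024, 1, 2, 12, 0)),
]
incidents = [
    ("Commit", datetime(2024, 1, 1, 9, 5), datetime(2024, 1, 1, 9, 25)),
    ("Acceptance", datetime(2024, 1, 2, 10, 0), datetime(2024, 1, 2, 11, 0)),
]

print(lead_time(changes))          # average lead time across all changes
print(mttr(incidents, "Commit"))   # recovery time within the Commit stage
```

Computing the same two measures per stage, rather than only end-to-end, is what gives the finer-grained optimisation signal described above.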
  4166. 1
  4167. 1
  4168. 1
  4169. 1
  4170. Ok, what do you think there is to stop an "uncontrollable intelligence expansion"? AIs are already advancing a lot faster than humans are. Compare the performance of an AI from 2 years ago with one from today. This pace is accelerating quite dramatically at the moment, so what, and where, is the pressure to stop it? AI researchers used to think that there were several "walls" that would slow progress toward the singularity. Now, many of those "walls" have fallen. One of the last is "multimodality", that is, being able to do lots of different things. We already have AI that meets this criterion of multimodality, though still not always at the level of performance of the best human in all aspects.

I disagree with your point about "evidence". We are in a game where, every time there is an advance in AI, we move the goal posts. It used to be thought that an AI could never beat a human at chess. It has been a very long time since the best human could beat the best AI at chess. So we said, "ah, chess is easier than we thought". Now computers are better at analysing medical scans, folding proteins, finding obscure case law, playing games, writing (at least in terms of speed), translation, drawing and controlling machinery. There may be some barrier, but I see no obvious evidence for where it is. AI can now learn to play a game on its own and beat people at it: a recent AI Go champion (Go being a more complex game than chess) was never programmed with the rules of the game, but learned the game from playing it (millions of times, in minutes) and was then better than a person. We used to say an AI couldn't be a doctor or a lawyer, but AIs have passed Bar exams and the qualifying exams to be a doctor. They are still not good enough to do those jobs, but they are already better than people at many tasks.

People researching the social impact of AI say that in 5 years AI will be able to do half of all jobs, and the jobs that are easiest for AI to replace people in are the jobs that are most highly paid. Bill Gates says there will be no programming jobs in 5 years' time. So I am not sure what counts as evidence that this can't happen. At the moment this *is happening*, so unless we can see what stops it, why is it sensible to assume that it won't happen?
    1
  4171. 1
  4172. 1
  4173. 1
  4174. 1
  4175. 1
  4176. 1
  4177. 1
  4178. 1
  4179. 1
  4180. 1
  4181. 1
  4182. 1
  4183. 1
  4184. 1
  4185. 1
  4186. 1
  4187. 1
  4188. 1
  4189. 1
  4190. 1
  4191. 1
  4192. 1
  4193. 1
  4194. 1
  4195. 1
  4196. 1
  4197. 1
  4198. 1
  4199. 1
  4200. 1
  4201. 1
  4202. 1
  4203. 1
  4204. 1
  4205. 1
  4206. 1
  4207. 1
  4208. 1
  4209. 1
  4210. 1
  4211. 1
  4212. 1
  4213. 1
  4214. 1
  4215. 1
  4216. 1
  4217. 1
  4218. 1
  4219. 1
  4220. 1
  4221. 1
  4222. 1
  4223. 1
  4224. 1
  4225. 1
  4226. 1
  4227. 1
  4228. 1
  4229. 1
  4230. 1
  4231. 1
  4232. 1
  4233. 1
  4234. 1
  4235. 1
  4236. 1
  4237. 1
  4238. 1
  4239. 1
  4240. 1
  4241. 1
  4242. 1
  4243. 1
  4244. 1
  4245. 1
  4246. 1
  4247. 1
  4248. 1
  4249. 1
  4250. 1
  4251. 1
  4252. 1
  4253. 1
  4254. 1
  4255. 1
  4256. 1
  4257. 1
  4258. 1
  4259. 1
  4260. 1
  4261. 1
  4262. 1
  4263. 1
  4264. 1
  4265. 1
  4266. 1
  4267. 1
  4268. 1
  4269. 1
  4270. 1
  4271. 1
  4272. 1
  4273. 1
  4274. 1
  4275. 1
  4276. 1
  4277. 1
  4278. 1
  4279. 1
  4280. 1
  4281. 1
  4282. 1
  4283. 1
  4284. 1
  4285. 1
  4286. 1
  4287. 1
  4288. 1
  4289. 1
  4290. 1
  4291. 1
  4292. 1
  4293. 1
  4294. 1
  4295. 1
  4296. 1
  4297. 1
  4298. 1
  4299. 1
  4300. 1
  4301. 1
  4302. 1
  4303. 1
  4304. 1
  4305. 1
  4306. 1
  4307. 1
  4308. 1
  4309. 1
  4310. 1
  4311. 1
  4312. 1
  4313. 1
  4314. 1
  4315. 1
  4316. 1
  4317. 1
  4318. 1
  4319. 1
  4320. 1
  4321. 1
  4322. 1
  4323. 1
  4324. 1
  4325. 1
  4326. 1
  4327. 1
  4328. 1
  4329. 1
  4330. 1
  4331. 1
  4332. 1
  4333. 1
  4334. 1
  4335. 1
  4336. 1
  4337. 1
  4338. 1
  4339. 1
  4340. 1
  4341. 1
  4342. 1
  4343. 1
  4344. 1
  4345. 1
  4346. 1
  4347. 1
  4348. 1
  4349. 1
  4350. 1
  4351. 1
  4352. 1
  4353. 1
  4354. 1
  4355. 1
  4356. 1
  4357. 1
  4358. 1
  4359. 1
  4360. 1
  4361. 1
  4362. 1
  4363. 1
  4364. 1
  4365. 1
  4366. 1
  4367. 1
  4368. 1
  4369. 1
  4370. 1
  4371. 1
  4372. 1
  4373. 1
  4374. 1
  4375. 1
  4376. 1
  4377. 1
  4378. 1
  4379. 1
  4380. 1
  4381. 1
  4382. 1
  4383. 1
  4384. 1
  4385. 1
  4386. 1
  4387. 1
  4388. 1
  4389. 1
  4390. 1
  4391. 1
  4392. 1
  4393. 1
  4394. 1
  4395. 1
  4396. 1
  4397. 1
  4398. 1
  4399. 1
  4400. 1
  4401. 1
  4402. 1
  4403. 1
  4404. 1
  4405. 1
  4406. 1
  4407. 1
  4408. 1
  4409. 1
  4410. 1
  4411. 1
  4412. 1
  4413. 1
  4414. 1
  4415. 1
  4416. 1
  4417. 1
  4418. 1
  4419. 1
  4420. 1
  4421. 1
  4422. 1
  4423. 1
  4424. 1
  4425. 1
  4426. 1
  4427. 1
  4428. 1
  4429. 1
  4430. 1
  4431. 1
  4432. 1
  4433. 1
  4434. 1
  4435. 1
  4436. 1
  4437. 1
  4438. 1
  4439. 1
  4440. 1
  4441. That's rather a silly statement when you don't know me. Just because you disagree with my ideas doesn't mean that I have never written any code. My guess is that I have probably written a lot more code than you, because I am probably a lot older than you, just based on the statistics of our industry, but I have no means of knowing how much code you have written.

I have built most kinds of code that you can think of. I used to work for PC manufacturers and wrote OS extensions, additions to the BIOS, and device drivers. I wrote simple games for early home computers, and later got interested in graphics programming in general and built a ray-tracing animation system from scratch; this was before there were such things as graphics co-processors, or even graphics libraries. I got interested in distributed systems and wrote what we would now call a data mesh, a platform for micro-service-like systems, in 1990. Later I worked on big commercial systems, and worked at a company that built some of the very early commercial systems on the web. I took over the lead of one of the world's biggest Agile projects when I was a Tech Principal at ThoughtWorks; Tech Principal at TW was always a hands-on role. Amongst other things during that time, I helped to build a point-of-sale system that, if you lived in the UK, you would almost certainly have used. I led a team, again hands-on in the code, as head of software engineering for LMAX, where we built one of the world's highest-performance financial exchanges. Average latency of a trade was 80 microseconds.

This is just a sample. You don't have to agree with me, but a more sensible response would be to show where my arguments are wrong, rather than simply resorting to what I assume you think of as a personal attack. I am a very experienced programmer; that doesn't mean I am right. I may have ideas that I haven't tried (that is not true, but I may have), and that doesn't necessarily mean that they are bad ideas. You should learn to evaluate ideas on their merits; what people say is more important than who says it!
    1
  4442. 1
  4443. 1
  4444. 1
  4445. 1
  4446. 1
  4447. 1
  4448. 1
  4449. 1
  4450. 1
  4451. 1
  4452. 1
  4453. 1
  4454. 1
  4455. 1
  4456. 1
  4457. 1
  4458. 1
  4459. 1
  4460. 1
  4461. 1
  4462. 1
  4463. 1
  4464. 1
  4465. 1
  4466. 1
  4467. 1
  4468. 1
  4469. 1
  4470. 1
  4471. 1
  4472. 1
  4473. 1
  4474. 1
  4475. 1
  4476. 1
  4477. 1
  4478. 1
  4479. 1
  4480. 1
  4481. 1
  4482. 1
  4483. 1
  4484. 1
  4485. 1
  4486. 1
  4487. 1
  4488. 1
  4489. 1
  4490. 1
  4491.  @michaelmorris4515  I am afraid that none of your assumptions here are true. First, this is not a theoretical approach. I, and many others, have used this on big, complex, real-world systems. One of my clients used this approach to test medical devices in hospitals. Another uses it to test scientific instruments. I built one of the world's highest-performance financial exchanges using this approach, and I found out this week that the tests are still working and providing value 13 years later.

I think that your example focuses on the technicalities rather than the behaviour. "I expect the transaction table to show these values" sounds to me like you are leaking implementation detail into your test cases, and that is why they are fragile. What is it that the user really wants? Do they really care about "transaction tables"? When they walked up to the computer to do a job, were they thinking "what I need to do is make sure that the transaction table shows these entries"? I doubt it.

I can't give you a real example, because I don't know what your app does, but I'd try to capture the intent that the user had. So forgive me for making something up, but let's say that in your case a "transaction" represents selling something, and your "transaction table" represents a list of things, or services, sold. Then I can think of a few scenarios that matter to a user: "I want to be able to buy something and see that I have bought it" (it ends up in the "transaction table"); "I'd like to be able to buy a few things and see a list of the things that I bought" (they all end up in the transaction table); and so on.
    1
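Those user-intent scenarios might look something like this as executable tests. Everything here is invented for illustration: the "driver" is the only layer that knows how purchases are recorded (an in-memory list standing in for the real transaction table), so the test cases themselves survive UI and schema changes.

```python
# Hypothetical intent-focused acceptance tests. The driver hides the
# implementation detail; the tests speak the user's language.

class ShoppingDriver:
    """Test-infrastructure layer: the only code that knows where
    purchases are stored (here, a simple in-memory list standing in
    for a real transaction table)."""
    def __init__(self):
        self._transactions = []

    def buy(self, item):
        self._transactions.append(item)

    def purchased_items(self):
        return list(self._transactions)

def test_user_can_buy_something_and_see_it():
    # "I want to be able to buy something and see that I have bought it."
    shop = ShoppingDriver()
    shop.buy("book")
    assert "book" in shop.purchased_items()

def test_user_sees_everything_they_bought():
    # "I'd like to buy a few things and see a list of what I bought."
    shop = ShoppingDriver()
    shop.buy("book")
    shop.buy("pen")
    assert shop.purchased_items() == ["book", "pen"]

test_user_can_buy_something_and_see_it()
test_user_sees_everything_they_bought()
```

If the "transaction table" later becomes a different screen, API, or schema, only the driver changes; the scenarios, which express what the user wanted, do not.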
  4492. 1
  4493. 1
  4494. 1
  4495. 1
  4496. 1
  4497. 1
  4498. 1
  4499. 1
  4500. 1
  4501. 1
  4502. 1
  4503. 1
  4504. 1
  4505. 1
  4506. 1
  4507. 1
  4508. 1
  4509. 1
  4510. 1
  4511. 1
  4512. 1
  4513. 1
  4514. 1
  4515. 1
  4516. 1
  4517. 1
  4518. 1
  4519. 1
  4520. 1
  4521. 1
  4522. 1
  4523. 1
  4524. 1
  4525. 1
  4526. 1
  4527. 1
  4528. 1
  4529. 1
  4530. 1
  4531. 1
  4532. 1
  4533. 1
  4534. 1
  4535. 1
  4536. 1
  4537. 1
  4538. 1
  4539. 1
  4540. 1
  4541. 1
  4542. 1
  4543. 1
  4544. 1
  4545. 1
  4546. 1
  4547. 1
  4548. 1
  4549. 1
  4550. 1
  4551. 1
  4552. 1
  4553. 1
  4554. 1
  4555. 1
  4556. 1
  4557. 1
  4558. 1
  4559. 1
  4560. 1
  4561. 1
  4562. 1
  4563. 1
  4564. 1
  4565. 1
  4566. 1
  4567. 1
  4568. 1
  4569. 1
  4570. 1
  4571. 1
  4572. 1
  4573. 1
  4574. 1
  4575. 1
  4576. 1
  4577. 1
  4578. 1
  4579. 1
  4580. 1
  4581. 1
  4582. 1
  4583. 1
  4584. 1
  4585. 1
  4586. 1
  4587. 1
  4588. 1
  4589. 1
  4590. 1
  4591. 1
  4592. 1
  4593. 1
  4594. 1
  4595. 1
  4596. 1
  4597. 1
  4598. 1
  4599. 1
  4600. 1
  4601. 1
  4602. 1
  4603. 1
  4604. 1
  4605. 1
  4606. 1
  4607. 1
  4608. 1
  4609. 1
  4610. 1
  4611. 1
  4612. 1
  4613. 1
  4614. 1
  4615. 1
  4616. 1
  4617. 1
  4618. 1
  4619. 1
  4620. 1
  4621. 1
  4622. 1
  4623. 1
  4624. 1
  4625. 1
  4626. 1
  4627. 1
  4628. 1
  4629. 1
  4630. 1
  4631. 1
  4632. 1
  4633.  @kennethgee2004 Sorry, still disagree, but as you say, that doesn't mean that I am right. I too have had job titles like "architect", "enterprise architect" and "principal architect", for more years than I care to recall, but none of that means that I am right either. However, I do think it earns me the right to hold an opinion.

The divisions between "architect", "engineer" and "developer" that you mention at the end are NOT "principles"; they are the kind of division that some types of company tend to apply to job roles, and these are usually not the kinds of company that are building great software. For example, one of Elon Musk's sayings in the context of Tesla and SpaceX (let's not mention Twitter) is that "everyone is chief engineer", which means that EVERYONE is responsible for everything, and everyone is encouraged to "take part" anywhere that their interest and experience takes them. I would say that if you separate the roles in the way that you describe, you will pretty much always get a sub-par result.

What you describe is what I would call the "ivory tower model" of software architecture. Everyone, whatever their job, does a better job when they are close to the results of their decisions. I want to see where my ideas fail and how, and where they succeed and how. If architects are NOT at least sometimes working alongside engineers and developers on a frequent basis, they will make mistakes, by skating over complexity that invalidates their ideas. This is probably the commonest form of "SW architecture" in our industry, in my experience.
    1
  4634. 1
  4635. 1
  4636. 1
  4637. 1
  4638. 1
  4639. 1
  4640. 1
  4641. 1
  4642. 1
  4643. 1
  4644. 1
  4645. 1
  4646. Unfortunately, it isn't that simple. It really depends on the nature of your code, much more than on the difference between Functional and OO. Functional code incurs more cost copying things, or providing the illusion of copying things in order to sustain immutability; OO tends to create more transient state, so, as you say, garbage collection is more of an issue. But how those things play out in different bits of code is very specific to those bits of code.

The real secret to high-performance code is to understand what is going on. So, for example, learn how garbage collection works in your tech, and learn how to profile and tune it to meet the needs of the system. I used to write ultra-high-performance financial systems, and two spring to mind here. In one, we tuned the garbage collection so that the really costly, stop-the-world kind of sweep would happen less than once per day, and then we reset the system daily, so in practice it never happened. In the other, we wrote our OO code so that it was immutable and allocated on the stack, so there was no GC at all. Neither of these was written as a Functional system.

Functional systems in general are not high performance by default, because of all the work that the languages and compilers do behind the scenes, like enforcing immutability, but I am sure that there are ways of using them and tuning them to do better than the default. It did cross my mind to implement something high performance both ways and see which worked better, but it would be a lot of work, and even if I did that, I don't think it would help. Performance is more about what we called "Mechanical Sympathy": understanding how the underlying system works (hardware, OS, language, frameworks, etc.) and using those things efficiently.
    1
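The "reduce transient allocation in hot paths" idea above can be illustrated with a small, invented example (Python here purely for illustration; the systems described in the comment were not Python). The same computation is written once in an allocation-heavy style and once with a single value updated in place, producing far less transient garbage per iteration.

```python
# Illustrative only: two implementations of summing every sliding
# window of a list. The first allocates a new window object on every
# step; the second keeps one running value and updates it in place,
# the style that keeps GC quiet in a hot loop.

def sum_of_windows_allocating(data, width):
    total = 0
    for i in range(len(data) - width + 1):
        window = data[i:i + width]   # allocates a fresh list each step
        total += sum(window)
    return total

def sum_of_windows_reusing(data, width):
    total = 0
    running = sum(data[:width])      # one window sum, maintained in place
    total += running
    for i in range(width, len(data)):
        running += data[i] - data[i - width]   # slide, no new objects
        total += running
    return total

data = list(range(100))
assert sum_of_windows_allocating(data, 5) == sum_of_windows_reusing(data, 5)
```

The behaviour is identical; only the allocation profile differs, which is exactly the kind of thing a profiler (or GC log) makes visible.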
  4647. 1
  4648. 1
  4649. 1
  4650. 1
  4651. 1
  4652. 1
  4653. 1
  4654. 1
  4655. 1
  4656. 1
  4657. 1
  4658. 1
  4659. 1
  4660. 1
  4661. 1
  4662. 1
  4663. 1
  4664. 1
  4665. 1
  4666. 1
  4667. 1
  4668. 1
  4669. 1
  4670. 1
  4671. 1
  4672. 1
  4673.  @CarlosVera-jy7iy  I'd think of this as general defensive design. There is a difference between the service that the service provides and the API to that service, so a good separation of concerns means that we have code to deal with the API calls and different code to deal with the service that those calls represent.

If you send a service a message, maybe including an item, a quantity and an account number, I could crack the message (the API call) in-line with creating the order, or I could extract the parameters that I am interested in:

    item = getStringParam(msg, "order/item")
    qty = getLongParam(msg, "order/quantity")
    accountId = getLongParam(msg, "order/accountId")

and then call placeOrder(item, qty, accountId). This is better code than in-lining the cracking of the parameters with the placing of orders. Good design says each part of the code should be focused on doing one thing; here we have two things, cracking params and placing orders, and they are at VERY different levels of abstraction, so combining them will very often lead to problems.

As far as testing goes, the param-cracking helpers, getStringParam and getLongParam in my example, would have been built with TDD in the abstract, which means that for cracking this specific message there is little testing left to do: does "order/item" map to item, etc.? I may test that with TDD or integration tests, depending on my design and the rest of the system. The really interesting bit, though, is the logic in placeOrder, which should now be perfectly testable.
    1
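A runnable sketch of that separation. The helper names follow the comment's pseudocode; the message format (a nested dict standing in for, say, parsed JSON or XML) and the body of the order-placing function are invented for illustration.

```python
# Hypothetical sketch: API layer (parameter cracking) kept separate
# from domain logic (placing an order).

def get_param(msg, path):
    """Walk an 'order/item'-style path into a nested dict."""
    node = msg
    for key in path.split("/"):
        node = node[key]
    return node

def get_string_param(msg, path):
    return str(get_param(msg, path))

def get_long_param(msg, path):
    return int(get_param(msg, path))

def place_order(item, qty, account_id):
    """Domain logic at a single level of abstraction, testable
    without any message-cracking concerns."""
    if qty <= 0:
        raise ValueError("quantity must be positive")
    return {"item": item, "qty": qty, "account": account_id}

def handle_order_message(msg):
    # API layer: crack the parameters, then delegate to the domain.
    item = get_string_param(msg, "order/item")
    qty = get_long_param(msg, "order/quantity")
    account_id = get_long_param(msg, "order/accountId")
    return place_order(item, qty, account_id)

msg = {"order": {"item": "widget", "quantity": "3", "accountId": "42"}}
print(handle_order_message(msg))
```

With this split, place_order can be driven entirely by unit tests, while the mapping from "order/item" to item needs at most a thin mapping test.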
  4674. 1
  4675. 1
  4676. 1
  4677. 1
  4678. 1
  4679. 1
  4680. 1
  4681. 1
  4682. There is some research on this, though it is based on small numbers of people being studied; there have been several studies that tend to agree. In general a pair of people finishes a task in 60% of the time of an individual working alone, but the work of the pair is of significantly higher quality. I know of multiple studies that say much the same thing, but the real saving in time is less about the effort and more about the dwell times in the process, when nothing is happening. In a PR-driven org, most PRs spend a significant amount of time waiting to be reviewed. One of my clients says that on average their PRs take about a week to be processed. I don't have data to know whether that is uncommonly long or normal, but I do know that it is certainly NOT unusual.

The other saving of effort in pair programming is that there is no catch-up or context-switching time. Because the pair is working together, while they may spend time debating a solution, there is no time spent bringing a "reviewer" up to speed with the problem and the solution. My own, subjective, experience of pair programming is that it is significantly more efficient and effective than working with Pull Requests. Most orgs that I have seen that operate a PR approach have low-quality reviews, because they are done asynchronously and off-line, and so the developer doesn't get useful direct feedback. Of course it is "possible" to do a better job than that, but the orgs that I know that practice PRs don't, while the orgs that I know, and have been a part of, that practice pair programming do.
    1
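The 60% figure above implies a simple back-of-envelope trade-off. The numbers below (a 10-hour solo task, a 40-hour review dwell) are invented for illustration; only the 0.6 multiplier comes from the studies mentioned.

```python
# Back-of-envelope model of the pairing trade-off described above.
# solo_hours and review_dwell_hours are illustrative assumptions.

solo_hours = 10.0                  # one developer, working alone
pair_hours = solo_hours * 0.6      # studies: pairs finish in ~60% of the time

solo_effort = 1 * solo_hours       # person-hours spent on the task
pair_effort = 2 * pair_hours       # two people, for the shorter duration

review_dwell_hours = 40.0          # e.g. a PR sitting in a review queue

solo_lead_time = solo_hours + review_dwell_hours  # waiting dominates
pair_lead_time = pair_hours                       # review happens in-line

print(pair_effort, "person-hours vs", solo_effort)   # ~20% more effort
print(pair_lead_time, "hours vs", solo_lead_time)    # far shorter lead time
```

Under these assumptions pairing costs a little more raw effort (12 vs 10 person-hours) but collapses the lead time, because the review dwell, the largest component, disappears.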
  4683. 1
  4684. 1
  4685. 1
  4686. 1
  4687. 1
  4688. 1
  4689. 1
  4690. 1
  4691. 1
  4692. 1
  4693. 1
  4694. 1
  4695. 1
  4696. 1
  4697. Good argument! I think of myself as a somewhat idealistic pragmatist. I think that, while there is no "one true way", there are ideas that we should rule out because they are dumb (the idealism part), but that humans are fallible, non-rational and biologically programmed to jump to conclusions based on guesswork rather than apply science, rationality and maths (the pragmatism part). So programming is not maths, because maths is too hard for most of us to do well, and it is a social activity that we need to adapt to be "easy enough" for most of us to do well.

One of the problems with programming is that it is such a slippery slope. You can teach young children to write simple code, but it doesn't take much to break simple code. To build systems that lots of people can use for important things is world-class difficult. I don't mind that; in fact I kind of like that it is difficult, but I think it is a mistake to always be looking for trivial answers when sometimes the answers are hard. There is no way to make concurrency simple! You can limit how damaging it can be by adopting certain disciplines or approaches, but it is always a world-class difficult problem. Information in different places, changing, is up there with quantum physics (in fact it may be the same problem) in my view.

So I think it is important that any programming paradigm should, ideally, be helping to protect us from some of the more damaging excesses of the slippery slope of programming. Also, there is no simple "XX is best" answer, ever.
    1
  4698. 1
  4699. 1
  4700. 1
  4701. 1
  4702. 1
  4703. 1
  4704. 1
  4705. 1
  4706. 1
  4707. 1
  4708. 1
  4709. 1
  4710. 1
  4711. 1
  4712. 1
  4713. 1
  4714. 1
  4715. 1
  4716. 1
  4717. 1
  4718. 1
  4719. 1
  4720. 1
  4721. 1
  4722. 1
  4723. 1
  4724. 1
  4725. 1
  4726. 1
  4727. 1
  4728. 1
  4729. 1
  4730. 1
  4731. 1
  4732. 1
  4733.  @thescourgeofathousan  Sure, you can do it badly, but you can do anything badly. I have consulted with many companies that have implemented the strategy that you describe. Their commonest release schedule was measured in months, because of the difficulty of assembling a collection of pieces that work together. They have thrown out all the advantages of CI. Sure, different collections of components, services and sub-systems have different levels of coupling, so I think that the best strategy is to define your scope of evaluation to align with "independently deployable units of software" and do CD at that level.

I am sorry, I don't mean to be rude, but I don't buy the idea that "citing Google is a call to authority". I don't hold Google on a pedestal; we are talking about ways of working, and you said "The best way to manage relationships between SW elements is via CICD pipelines that trigger each other due to events that happen to each related element." Google is a real-world example of not doing that, and succeeding at a massive scale without it. So by what measure do you judge "best"? Not scale, because you reject my Google example. Maybe not speed either, because I can give you examples of orgs working much faster than you can with your strategy: Tesla can change the design of the car, and the factory that produces it, in under 3 hours. But is that a call to authority too? How about quality? The small team that I led built one of, if not the, highest-performance financial exchanges in the world. We could get the answer to "is our SW releasable?" in under 1 hour for our entire enterprise system, for any change whatever its nature, and we were in prod for 13 months & 5 days before the first defect was noticed by a user.

Finally, there is data. Read the State of DevOps reports and the Accelerate book. They describe the most scientifically justifiable approach to analysing performance in our industry, based on over 33k respondents so far. They measure Stability & Throughput, and can predict outcomes, like whether your company will make more money or not, based on its approach to SW dev. They say that if you can't determine the releasability of your SW at least once per day, then your SW will statistically be lower quality (measured by Stability) and you will produce it more slowly (measured by Throughput).

If your system is small and very simple, it is possible that you can build a system like you described, with chained pipelines, that can answer the question "is my change releasable?" for everyone on the team once per day. But I don't believe that this approach scales to SW more complex than the very simple while still achieving that; I have not seen it work so far. The longer it takes to get that answer, the more difficult it is to stay on top of failures and to understand what "the current version" of your system is. I really don't mean to be rude, but this really isn't the "best way"; at best it is "sometimes survivable", in my experience. The best way that I have seen working so far is to match the scope of evaluation to "independently deployable units of software", and the easiest way to do that is to have everything that constitutes that "deployable unit" in the same repo, whatever its scale.
    1
  4734. 1
  4735. 1
  4736. 1
  4737. 1
  4738. 1
  4739. 1
  4740. 1
  4741. 1
  4742. 1
  4743. 1
  4744. It is an interesting myth in our industry that this is a young person's profession. The reason that most people are young is that our profession has grown so fast. There are lots of people over 50 writing very good software, but they are a tiny minority, because when we were in our 20s the whole industry was lots smaller.

Speaking as someone in that 'old-programmer' category, what you are good at certainly changes as you grow older. My memory for some things is worse; I used to know every feature of the languages and tools that I used, and now I have to look more of those things up if I don't use them regularly. That is a function of my memory getting a bit worse, but also of the growth in complexity of tools and languages. My experience is much broader now, though, so that I am confident that I can write software to solve any problem solvable with software, and do a decent job, because I feel that I know what the fundamental principles are, and I trust myself to be able to work through a problem. That doesn't mean that I claim to know all the answers, but I do know how to go about finding the answers. I can design bigger, more complex systems than I used to be able to, because I am much better at design now, and know what it takes to evolve a great design. I think that I have gained a more holistic view of software development over the years.

So I wouldn't worry that this is a limited-time career. Having said that, at some point AI will be better at this stuff than us humans, but we will have more to worry about than just job-security at that point.😳
    1
  4745. 1
  4746. 1
  4747. 1
  4748. 1
  4749. 1
  4750. 1
  4751. 1
  4752. 1
  4753. 1
  4754. 1
  4755. 1
  4756. 1
  4757. 1
  4758. 1
  4759. 1
  4760. 1
  4761. 1
  4762. 1
  4763. 1
  4764. 1
  4765. 1
  4766. 1
  4767. 1
  4768. 1
  4769. 1
  4770. 1
  4771. 1
  4772. 1
  4773. 1
  4774. 1
  4775. 1
  4776. 1
  4777. 1
  4778. 1
  4779. 1
  4780. 1
  4781. 1
  4782. 1
  4783. 1
  4784. 1
  4785. 1
  4786. 1
  4787. 1
  4788. 1
  4789. 1
  4790. 1
  4791. 1
  4792. 1
  4793. 1
  4794. 1
  4795. 1
  4796. 1
  4797. 1
  4798. 1
  4799. 1
  4800. 1
  4801. 1
  4802. 1
  4803. 1
  4804. 1
  4805. 1
  4806. 1
  4807. 1
  4808. 1
  4809. 1
  4810. 1
  4811. 1
  4812. 1
  4813. 1
  4814. 1
  4815. 1
  4816. 1
  4817. 1
  4818. 1
  4819. 1