YouTube comments by @retagainez.

  1-36. [comment text not captured in this export; only counts remain]
  37. He probably omits PRs because he thinks pair programming is a better way to do code review (which might not work for open source), but it doesn't make much difference. If you find that PRs prevent you from using this methodology, that probably points to an issue with the code review process, not with everything else. As for your last question: it takes a lot of trust in your team; you have to believe they will code using BDD/TDD and run the unit tests before pushing. Running unit tests locally is far faster than running them remotely. If failures only show up at integration (in your integration tests), your unit tests are insufficient: the design needs to be simplified, broken apart, or given better interfaces with well-defined ways to talk to other components. The same applies to unit tests that take far too long to run. If your unit testing is too simplistic to detect issues with things you consider "external components," external to the module you worked on, then the code needs better interfaces between the two systems. As for the rest of your questions, MANY resources cover these topics; they are the result of decades of software development. The XP book, written decades ago, is one example. I tried to address your question and any follow-up questions you might have in this explanation; if something needs clarification, I will do my best to explain. This is my understanding from watching Dave Farley's videos on CD for the past two years and reading his CD book (although I admit I need to re-read it sometime soon).
    4
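To make the comment's point about interfaces and fast local unit tests a bit more concrete, here is a minimal, hypothetical sketch (the `PaymentGateway` interface, `CheckoutService`, and the fakes are invented for illustration and assume JUnit 5 on the classpath; none of this comes from the original thread or from Farley's material): when a module talks to an external system only through a narrow interface, a unit test with a trivial fake exercises the interaction locally, so problems don't have to wait for the integration stage to show up.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical narrow interface: the only way the module talks to the payment system.
interface PaymentGateway {
    boolean charge(String accountId, long amountCents);
}

// The module under test depends on the interface, not on the real external service.
class CheckoutService {
    private final PaymentGateway gateway;

    CheckoutService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    String checkout(String accountId, long amountCents) {
        if (amountCents <= 0) {
            return "REJECTED";
        }
        return gateway.charge(accountId, amountCents) ? "PAID" : "DECLINED";
    }
}

class CheckoutServiceTest {

    // Tiny fakes standing in for the external system; they run instantly on a developer machine.
    private final PaymentGateway alwaysApproves = (accountId, amountCents) -> true;
    private final PaymentGateway alwaysDeclines = (accountId, amountCents) -> false;

    @Test
    void paysWhenGatewayApproves() {
        assertEquals("PAID", new CheckoutService(alwaysApproves).checkout("acct-1", 500));
    }

    @Test
    void reportsDeclineWithoutNeedingTheRealService() {
        assertEquals("DECLINED", new CheckoutService(alwaysDeclines).checkout("acct-1", 500));
    }

    @Test
    void rejectsNonPositiveAmountsBeforeTouchingTheGateway() {
        assertEquals("REJECTED", new CheckoutService(alwaysDeclines).checkout("acct-1", 0));
    }
}
```

Because `CheckoutService` only knows the interface, the same tests keep passing whether the real gateway is an HTTP service or a message queue, and the integration tests are left to verify only the wiring.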
  38-82. [comment text not captured in this export; only counts remain]
  83. That's what I typically understood Clean Code to mean in the first place. Is there something that states otherwise, that clean code is meant to prevent bugs rather than simply being well-structured code? Isn't creating your own well-structured set of principles, in effect, your own interpretation of Clean Code? The opening argument that Clean Code is about finding "the places that changes need to be made now" is closer to what Working Effectively with Legacy Code discusses. Every point after that, where you explain how you don't read the entire file or "read top to bottom," lines up consistently with Clean Code principles, so it's odd and slightly hypocritical to claim to be against one thing while advocating similar points. Most of what your code example shows about the so-called "includer" is a particularly poor example of what Clean Code might mean. I personally think the example might be coupled to the setup/teardown of the FitNesse library. It took me about a minute to arrive at that just from reading function names, though, plus the odd one-liner here or there. So I think the code itself is clean and clear, but perhaps the naming and the practicality of setting up such a test are lost on people, and that is the problem with the example. Otherwise, it's relatively clear. Am I wrong about this? The appended content suggests that I'm more or less on the right track. Reading bottom-up is for when you have no test suite. It is completely fair to say that companies end up in awful situations without test suites, so all that's left is to read bottom-up; but that is what the Working Effectively with Legacy Code book is about. I think the strength of Clean Code, or agile, is that it is vague. Clean Code in particular has always pushed the need to develop an intuition, one which in my mind vaguely aligns with various principles but doesn't contradict them. If you have a code smell or something you generally dislike, Clean Code is supposed to help you recall the typical things you might check off and analyze to solve the problem.
    2
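As an aside on the "reading function names" point above, here is a small, hypothetical illustration (the `OrderReport` class, its helpers, and the "name:priceCents" convention are invented; this is not the FitNesse "includer" code under discussion): the top-level method reads top to bottom as a summary, and the private helpers let a reader stop at whatever level of detail answers their question.

```java
import java.util.List;

class OrderReport {

    // Reads like a summary: compute the subtotal, apply tax, format the result.
    String render(List<String> lineItems, double taxRate) {
        double subtotal = subtotalOf(lineItems);
        double total = withTax(subtotal, taxRate);
        return formatted(lineItems.size(), total);
    }

    private double subtotalOf(List<String> lineItems) {
        // Illustration only: pretend each line item encodes "name:priceCents".
        return lineItems.stream()
                .mapToDouble(item -> Double.parseDouble(item.split(":")[1]) / 100.0)
                .sum();
    }

    private double withTax(double subtotal, double taxRate) {
        return subtotal * (1.0 + taxRate);
    }

    private String formatted(int itemCount, double total) {
        return String.format("%d items, total %.2f", itemCount, total);
    }

    public static void main(String[] args) {
        // Prints: 2 items, total 6.48
        System.out.println(new OrderReport().render(List.of("coffee:350", "bagel:250"), 0.08));
    }
}
```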
  84-196. [comment text not captured in this export; only counts remain]
  197.  @Orneyrocks1609  Well, just give the Romans guns and there's no technological edge, so what's the point? You're changing the whole premise by saying that. The Romans had tactics and strategies based on their current technology, not just because; the same applies to the platoon. Steel shields, sure, they could work, but they would be heavy and would not guarantee protection of vital areas, and the platoon can retreat since it carries a much lighter load; the problem only grows once you consider steel armor. Hypothetically speaking, as you mentioned with javelin volleys, if you have thousands of troops 20 meters away from a platoon of anything, of course the thousands of troops win. Without any more hypotheticals, how do you get those troops to advance toward a platoon hundreds if not thousands of yards away without suffering any of the morale or mobility issues I mentioned (not even accounting for the hypothetical steel-armored Romans, which surely adds a significant amount of weight)? You continue not to acknowledge that the loss of a leader leads to disarray. Having a system of leadership does not guarantee that all the leaders think as a hivemind. Different leaders have different approaches, tactics, and ways of problem-solving, which leads to chaos or disorder, since you cannot really predict which leader will fall. The purpose of a leader is to coordinate movements and relay information, not simply to exist and pep-talk everybody. How can you achieve this when leaders are dying at an unusually fast rate? How can you rely on the soldier to follow when he sees the foundation of his military strength crumbling around him? And all of this happens before the Romans even have a chance of reaching the platoon in time. How can you rely on the next leader in line to continue the order to charge under the imminent risk of quick and certain death? They are the new leader, after all: RETREAT AND LIVE TO NOT FIGHT ANOTHER DAY!
    1
  198-255. [comment text not captured in this export; only counts remain]
  256.  @AlexGnok  CI (and CD by extension) is going to be hard for your company if it's difficult for a lot of your developers to work together in a single branch AND to build automated "QA" into more facets of the software development. Trust is a big factor: trust that the tests are correct, and that whoever wrote them wrote a useful test. Trusting your "neighbors" is also important, so that they don't break your code when they add a commit. Some of your testing may be manual, but the point is to reduce how much of it is manual. Testing accessibility can be hard when there aren't any tools; you might have to write your own tooling to assess the level of accessibility of your app, and it can be useful to do that on some low-hanging fruit to get a small sneak peek into how a more dedicated solution would help. I am sure there are objective ways to measure how accessible something is. For browsers at least, there is a plethora of automated tools: there is a whole set of "Web Content Accessibility Guidelines" and automated tools that flag errors against those guidelines. Now, I will risk some controversy by suggesting that maybe the problem is not that many people are junior level, but something much simpler, viewed at a layer above evaluating individual developers: maybe that company isn't a shining example of how to write software. Or maybe accessibility isn't very important yet. It's up to you whether you think it's worth it; somehow, somebody thinks it's all worth some $ amount.
    1
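To illustrate the "write your own tooling for the low-hanging fruit" idea in the comment above, here is a hedged sketch (it assumes the jsoup HTML parser and JUnit 5 on the classpath; the markup and the single rule checked are invented, and a real WCAG audit would rely on dedicated scanners): a small test that flags `img` elements with no alt text.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.junit.jupiter.api.Test;

import java.util.List;
import java.util.stream.Collectors;

import static org.junit.jupiter.api.Assertions.assertTrue;

class ImageAltTextCheckTest {

    // One narrow, automatable rule (roughly WCAG 1.1.1): images need a text alternative.
    private List<String> imagesMissingAltText(String html) {
        Document page = Jsoup.parse(html);
        return page.select("img").stream()
                .filter(img -> img.attr("alt").trim().isEmpty())
                .map(Element::toString)
                .collect(Collectors.toList());
    }

    @Test
    void everyImageOnThePageHasAltText() {
        // Illustrative markup; in practice this would come from rendered pages or templates.
        String html = "<html><body>"
                + "<img src=\"logo.png\" alt=\"Company logo\">"
                + "<img src=\"chart.png\" alt=\"Quarterly revenue chart\">"
                + "</body></html>";

        List<String> offenders = imagesMissingAltText(html);
        assertTrue(offenders.isEmpty(), "Images missing alt text: " + offenders);
    }
}
```

It doesn't replace dedicated accessibility tooling, but it is the kind of cheap, objective check that can start running in CI right away.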
  257-267. [comment text not captured in this export; only counts remain]
  268.  @reneb86  Java is just a tool, like any other programming language. More recent engineering practices involve automated test suites, IaC, and CI/CD, and the video game industry has a very distinct gap in knowledge and experience with them, I've noticed. So much so that it advocates against these ideas in favor of "hardcore programming," even though skipping them is a known anti-pattern and hurts code design. I think there's a culture and clique within the gaming industry that focuses too heavily on gaming aspects rather than on engineering a good game. In my eyes, if you have a proper development environment you will focus even more on the broad, high-level picture of your game, the more important bits, rather than getting stuck in a low-level coding rut and obsessing over lines of code or something equally tedious. There's a reason some of the most renowned and long-lived games have focused on excellent tooling for content creation and moddability: testing the tooling becomes the focus, rather than testing the game, and you get a well-tested game almost by accident. It's a philosophical approach, and one that is largely lost in today's less visionary games. Java has an ecosystem that promotes CI/CD, automated testing, and so forth, but Java alone might not be a great tool for game development (though there are projects that argue against that idea anyway). The point is, the choice of language and tooling doesn't matter. What matters is that you choose your tooling, create it as needed, and test it well. This accidentally solves the question of "how do I test my game?" and lets you focus on validating harder-to-test things like UI/UX.
    1
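A rough sketch of the "test the tooling, not the game" point (everything here, the `LevelDefinition` record, the `LevelLoader`, and the key=value format, is invented for illustration and assumes JUnit 5; it is not taken from any particular engine): the content pipeline is plain, deterministic code, so it is cheap to cover with fast unit tests, and every piece of content that flows through it gets validated as a side effect.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical content-tooling code: parses a tiny "key=value" level description.
record LevelDefinition(String name, int enemyCount, double gravity) {}

class LevelLoader {
    LevelDefinition parse(String source) {
        String name = null;
        int enemyCount = -1;
        double gravity = Double.NaN;
        for (String line : source.split("\n")) {
            String[] parts = line.split("=", 2);
            if (parts.length != 2) continue;
            switch (parts[0].trim()) {
                case "name" -> name = parts[1].trim();
                case "enemies" -> enemyCount = Integer.parseInt(parts[1].trim());
                case "gravity" -> gravity = Double.parseDouble(parts[1].trim());
            }
        }
        if (name == null || enemyCount < 0 || Double.isNaN(gravity)) {
            throw new IllegalArgumentException("Incomplete level definition: " + source);
        }
        return new LevelDefinition(name, enemyCount, gravity);
    }
}

class LevelLoaderTest {
    @Test
    void parsesAWellFormedLevel() {
        LevelDefinition level = new LevelLoader()
                .parse("name=Caves\nenemies=12\ngravity=9.8");
        assertEquals("Caves", level.name());
        assertEquals(12, level.enemyCount());
        assertEquals(9.8, level.gravity(), 1e-9);
    }

    @Test
    void rejectsIncompleteContentBeforeItEverReachesTheGame() {
        assertThrows(IllegalArgumentException.class,
                () -> new LevelLoader().parse("name=Caves"));
    }
}
```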
  269.  @reneb86  Thank you for your thoughts. I understand that game developers might use CI/CD tooling (which doesn't surprise me, particularly if we reluctantly look at games like Fortnite), but it needs to be said that devs of all kinds still largely misinterpret the overall message of something that is a principle, not tooling. The same goes for automated testing. I don't disagree with much of what you say, but I often feel it reflects differences in overall game dev culture (and, similarly, the overly pragmatic engineers in ordinary enterprise programming). Still, even with pragmatism in mind, you can design well without sacrificing things like design and abstraction. For simplicity's sake, call it "enterprise programming": enterprise software engineers do indeed iterate quickly and pragmatically, just as game devs do, but that doesn't affect their ability to embrace and understand CI/CD, XP, DDD, BDD, and TDD. Security companies, for example, face rigor-versus-flexibility issues. The same goes for NASA, which needed critical software components, but not at the expense of design and testability. Like I said, I feel the video game industry is exceptionally behind in practices for the sake of pragmatism; in its favor, it is far more subjective for game devs whether certain bugs matter. Similarly, "agile" enterprises often claim they use CI/CD tools but misinterpret the principle completely. Case in point (and I'm making a very large distinction here while trying not to sound pedantic): CI/CD is most certainly not a tool. Likewise, regression testing is indeed automated testing, but the emphasis there once again lies on a single tool; talking about regression testing inherently misses the point of what test harnesses and unit tests do. When I refer to "getting lost in the low-level" aspects of programming, I'm usually not thinking about optimization. I'm talking about not being able to think abstractly enough because your mind is too tied up with low-level complexity. On the optimization argument: non-functional testing is most certainly a possible test scenario and can be covered in critical areas. What I mean by "getting stuck in low-level thinking" is the over-emphasis on low-level code design. Engineers who intentionally live in very low-level, high-cognitive-load areas of the code while delivering high-level features are not abstracting correctly and are not organized correctly (Conway's law). It's not a matter of whether they've gone high enough up the abstraction tree; their design is fundamentally flawed, and they have built the wrong kind of tree. Optimization has its own thought process that shouldn't be conflated with refactoring and general code design. Examples: NASA's Mars rovers use component-level testing (e.g., ChemCam lasers in vacuum chambers) to validate subsystems before integration, mirroring CI/CD's "shift-left" testing ethos. Enterprise Java ecosystems (e.g., Spring Boot) institutionalize practices like TDD and modular design, even when pragmatism demands rapid iteration. Intel's microprocessor validation involves fuzzing ALUs with billions of test vectors before chip integration, akin to unit testing in software.
Even at a monolithic game developer like Epic Games, we still see legacy codebases (e.g., Unreal Engine) resist refactoring due to intertwined systems, despite their attitude toward CI/CD. And non-functional testing (e.g., load testing GPU code) can be automated, as seen in automotive crash simulation.
    1
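To ground the "fuzzing with test vectors, akin to unit testing" comparison, here is a hedged, self-contained sketch (the `saturatingAdd` function and its golden model are invented, and it assumes JUnit 5; real ALU validation runs billions of vectors through hardware simulators, not JUnit): random inputs are checked against a slower, obviously correct reference model, with a fixed seed so any failure is reproducible.

```java
import org.junit.jupiter.api.Test;
import java.util.Random;
import static org.junit.jupiter.api.Assertions.assertEquals;

class SaturatingAddFuzzTest {

    // Hypothetical "device under test": addition that clamps at Integer.MAX_VALUE / MIN_VALUE.
    static int saturatingAdd(int a, int b) {
        long sum = (long) a + (long) b;
        if (sum > Integer.MAX_VALUE) return Integer.MAX_VALUE;
        if (sum < Integer.MIN_VALUE) return Integer.MIN_VALUE;
        return (int) sum;
    }

    // Slow but obviously correct reference, the role a golden model plays in hardware validation.
    static int referenceModel(int a, int b) {
        return (int) Math.max(Integer.MIN_VALUE, Math.min(Integer.MAX_VALUE, (long) a + (long) b));
    }

    @Test
    void randomVectorsAgreeWithTheReferenceModel() {
        Random random = new Random(42); // fixed seed so a failure is reproducible
        for (int i = 0; i < 1_000_000; i++) {
            int a = random.nextInt();
            int b = random.nextInt();
            assertEquals(referenceModel(a, b), saturatingAdd(a, b),
                    () -> "Mismatch for inputs " + a + ", " + b);
        }
    }
}
```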
  270-324. [comment text not captured in this export; only counts remain]
  325.  @KillerOfTheShadow  Well, it actually changed significantly for a lot of America. Rather than his being portrayed as the soldier who killed a father on Father's Day and a child, it came out that he was being prosecuted to an extreme degree for such a minor charge. The jury found him innocent 6 to 1 on all but the one charge, which he did not deny: the photograph. All other charges were dropped due to a lot of questionable evidence and hearsay. It exposed a huge lack of transparency, how the military can violate the most basic civil rights, even for service members, and, lastly, the dangerous egos and political backstabbery within the Navy. Simply said, there was no due process. His wife managed to convince the former president to step in, which is why the case turned out the way it did and allowed him to coordinate with lawyers more effectively, rather than being placed in solitary confinement for his whole trial and imprisoned for life. This was a pretty big case for something so seldom discussed and so poorly understood beyond the initial reporting from the large news outlets. If you're willing to look into more sources, Navy Times had a great article where the journalist discusses how NCIS got rid of some of their people afterward because of the way the case turned out. NCIS and the Navy sure went to some extreme lengths to protect their image. It just goes to show how underappreciated you are, and how little independence and how few rights you have, when you join the military. The reporting from AP, the NY Times, and most of the other news outlets was done extremely poorly. They unknowingly relied on intentional "leaks" from the gov't and focused primarily on NCIS's cherry-picking of evidence rather than aggregating all the evidence to see whether the case was sound in court; if they had, it would have shown that the prosecution had unreliable evidence and poor intent. The big news outlets did little to no investigative journalism, and that is quite scary if you look at it that way.
    1
  326-415. [comment text not captured in this export; only counts remain]
  416.  @krYrrr  I wonder the same thing; it depends on perspective. If she has savings, it's probably not that big of a deal. As for HR people, I already feel the stress of dealing with them from an interview perspective; I can't imagine what it must feel like on their end from a firing perspective. I suppose some people can thrive in a small world, but I find it hard to believe HR people don't feel conflicted that their imprint on the world regularly involves the suffering of other people for the sake of appeasing some faceless corporate entity that doesn't care for any single person. The company has a red flag in how it wastes money on these absurd processes. The tech isn't exactly bleeding edge and there are competitors, so it's nothing to sacrifice your personal life to. The corporate leaders who initiated this entire controversy are still in place. I'm speaking from my own anecdote of being a late joiner to a public company that finally met the "end of the line" after a long history of mass firings (restructurings), keeping only the top 1%, the "long tenure" employees who were members of the original (overly large, behemoth) company and had been through repeated rounds of hirings, mergers, and firings; there was one particular event involving significant financial/tax fraud, and a final nail in the coffin with bankruptcy. Anyway, my anecdote goes to show that corporate KPIs/metrics really mean nothing, except perhaps for the finances. The common thread between these companies that constantly restructured was the finances: hiring software guys is expensive, keeping your software running is expensive, and some software is just not worth it.
    1
  417-426. [comment text not captured in this export; only counts remain]