Comments by "Slipoch" (@slipoch6635) on "Continuous Delivery" channel.

Hmm, well the bloke at MS I know was working waterfall post-NT4. I'm not sure what on specifically, but he's an assembler programmer, so it would have been very low-level boot/driver/OS work. I'm also unsure why you think a small team means waterfall wasn't followed? I was part of a 3-person team working on application software and we had to do the project using waterfall. Doesn't matter anyway, it just seemed an odd assumption. OS/2 Warp (around the '95 days) was actually pretty decent IME, but IBM never bothered to really push it for non-business use, which is a shame, as it was working considerably better than Win 95 and pre-SP2 98. Getting decent software for it was a bugger though.

One of the more famous groups that used waterfall for many years (they may still) was Toyota, for their manufacturing software (kanban boards were used within their waterfall system). Cisco has used and still uses both Agile and waterfall for different things. As mentioned earlier, a LOT of older 3D rendering and engineering/architectural software was developed using waterfall. I have worked in several jobs where fully functional software was released and one major version was supported and updated for more than 10 years, and it was developed using waterfall. I have also worked agile in small and larger teams, and have supported original waterfall projects using agile for updates.

Now for patching, updating, anything with ongoing development and large change, etc., I prefer to use Agile. But if we want a robust code structure for a new project, particularly if the end-user is not going to be involved in the development at this stage, then I think it's either CI with a shedton of qualitative and edge-case tests plus code reviews/pair programming (or other fallbacks to reduce poor code quality), or mapping the planned software out using a waterfall methodology. If you then use agile for the implementation stage, that's fine, but the amount of crap I have seen in code where one thing has been done a few different ways in the same codebase, because a new team member didn't know it had already been done elsewhere, is something I see on pretty much every agile project I have worked on (none have been CI/CD). The rate of this issue occurring in waterfall was greatly reduced, as there is less unplanned change.

Perhaps you could do a vid on your favourite methods for ensuring the existing team is replaceable with others who don't have full knowledge of the codebase, and on how you would structure things so the wheel is not reinvented for a new feature when it already exists elsewhere, even when that may not be obvious?

I go with the flow for the most part. If I find we need a lot more planning, I'll do the planning. If something is a pain to modify each time we touch it, I will flowchart it, mapping out what it is actually doing and every point of the software that touches it. If it is a small change, I will just get stuck in and do it. If it is large, I will change the flowchart and try to push the code to be more modular and efficient (and more obvious).

The above resulted in one piece of layered agile import code going from taking 2-3 hours (often getting timeouts) and using 4-6 GB of RAM to taking 10 minutes and using 500 MB. It also avoided a couple of edge-case gotchas later on that would have killed it (and would not have been caught by the tests) when the imported data was a bit weird (it was a very oddly designed and very mutable data set, but the client had no control over it).

Horses for courses: whatever is the simplest way to make it a robust long-term solution should be the way to follow. Usually I use agile, but for more planned, less mutable work I will use a general-overview waterfall, with agile for the actual implementation phase and to handle any change to the overall plan.
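The comment doesn't say how that import was restructured, so the following is a minimal, purely illustrative Python sketch of one common way that kind of speed-up and memory reduction happens: moving from a load-everything-then-process import to a chunked, streaming one, so only a small batch of records is ever held in memory. The CSV format, chunk size, and the `import_record` handler are all hypothetical stand-ins, not the commenter's actual code.

```python
# Illustrative sketch only: contrast a load-all import with a chunked one.
import csv
from typing import Dict, Iterator, List


def import_record(row: Dict[str, str]) -> int:
    """Hypothetical per-record handler; stands in for whatever the real import does."""
    return 1 if row else 0


def load_all_then_import(path: str) -> int:
    """Naive approach: the whole file sits in memory before any work starts."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))  # entire dataset held in RAM at once
    return sum(import_record(row) for row in rows)


def stream_in_chunks(path: str, chunk_size: int = 1000) -> Iterator[List[Dict[str, str]]]:
    """Yield the file in fixed-size batches so only one batch is in memory."""
    with open(path, newline="") as f:
        batch: List[Dict[str, str]] = []
        for row in csv.DictReader(f):
            batch.append(row)
            if len(batch) >= chunk_size:
                yield batch
                batch = []
        if batch:
            yield batch


def chunked_import(path: str) -> int:
    """Streaming approach: process and discard each batch before reading the next."""
    imported = 0
    for batch in stream_in_chunks(path):
        for row in batch:
            imported += import_record(row)
    return imported
```

The point of the sketch is the shape of the change rather than the specifics: bounding memory per batch and doing work incrementally is the kind of restructuring that flowcharting an opaque import tends to surface.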