Comments by "Scott Franco" (@scottfranco1962) on "Continuous Delivery"
channel.
-
I have taken some heat before when joining projects that were in bad shape, where the prevailing opinion was that we should start over. I say, "you'll just make the same mistakes over again". I read a great story about a professor who took a position teaching electrical engineering at a college. The lab was in terrible shape; the instruments were all broken. The administrator asked the new professor to make a list of needed equipment and said he would see if he could find the money for it.
The new professor replied, "No, not a problem. We will use what we have." The administrator left, stunned. The new professor started his classes and took the new students out to the lab. Over the course of months, they took apart the broken equipment, got schematics for it, and went over what was wrong with each instrument as a group project. Slowly but surely, they got most of it working again. The students who did this became some of the best engineers the school had seen.
The moral of the story is applicable to software rewrites. The team that abandons the software and starts over does not learn anything from the existing software, even if they didn't write it. They create a new big mess to replace the old big mess. Contrast that with a team that is forced to refactor the code. They learn the mistakes in the code, how to fix them, and, perhaps most important of all, they become experts at refactoring code.
In the last two years, I have set a goal for myself to track down even "insignificant" problems in my code, and to go after the hardest problems first. In that time I have been amazed at how often a "trivial" problem turned out to illustrate a deep and serious error in the code. Similarly, I have been amazed at how solving the hard problems first makes the rest of the work go that much more easily.
I have always been a fan of continuous integration without calling it that. I simply always suspected that the longer a branch lived apart from the mainline, the harder it would be to reintegrate, versus small changes and improvements that take a day or so to merge. I can't take credit for this realization: too many times I have been assigned to merge projects that were complete messes because of the long span of branch development. As the old saw goes, the better you perform such tasks, the more of them you will get, especially if others show no competence at them.
-
On my best projects, I keep the tests, the code, and the complete report generated by the tests, time and date stamped, in the repo. When I worked at Cisco Systems, we went one better than that and kept the entire compiler chain in the repo, including compiler, linker, tools, etc.
I teach the init-test-teardown model of individual tests, and one of the first things I do when entering a project is mix up the order of the individual tests in the run. This makes them fail depressingly often. Most test programmers don't realize that their tests often depend inadvertently on previous test runs to set state in the hardware or simulation. I do understand your point about running them in parallel, but I admit I would rather run them in series, then mix up their order. Why? Because running them in parallel can generate seemingly random errors, and more importantly, those failures aren't repeatable. I would run them in order, then in mixed order, and only lastly in parallel because of this.
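To make the idea concrete, here is a minimal sketch in C of the init-test-teardown model with the run order shuffled. Every name in it is invented for illustration, not taken from any real framework:

```c
/* A minimal sketch of init-test-teardown with order shuffling.
   All names are hypothetical, invented for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int counter; /* state that init() resets */
static int mode;    /* state that init() forgets to reset: the latent bug */

static void init(void)     { counter = 0; } /* incomplete on purpose */
static void teardown(void) { /* release resources, log results, etc. */ }

static int test_count(void)        { counter++; return counter == 1; }
static int test_default_mode(void) { return mode == 0; } /* hidden order dependency */
static int test_set_mode(void)     { mode = 1; return mode == 1; }

typedef int (*test_fn)(void);

int main(void)
{
    test_fn tests[]     = { test_count, test_default_mode, test_set_mode };
    const char *names[] = { "count", "default_mode", "set_mode" };
    int n = 3;

    srand((unsigned)time(NULL));

    /* Fisher-Yates shuffle: any test that secretly depends on a
       predecessor's side effects will now fail some of the time. */
    for (int i = n - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        test_fn t = tests[i];     tests[i] = tests[j]; tests[j] = t;
        const char *s = names[i]; names[i] = names[j]; names[j] = s;
    }

    for (int i = 0; i < n; i++) {
        init();
        printf("%-12s %s\n", names[i], tests[i]() ? "PASS" : "FAIL");
        teardown();
    }
    return 0;
}
```

In the declared order everything passes. Run it a few times and test_default_mode fails whenever the shuffle puts test_set_mode ahead of it, which is exactly the class of bug a fixed order hides.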
Finally, many testers don't understand that testing needs to be both positive and negative. Most just test for positive results. Testing that the target should FAIL for bad inputs is as important as positive tests, or I would say MORE important, since such tests go to the robustness of the system. Further, we need to borrow concepts from the hardware test world and adopt coverage measurement and failure injection.
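As a sketch of what I mean, assuming a made-up parse_port() routine as the unit under test:

```c
/* A sketch of positive vs. negative testing. parse_port() is an
   invented unit under test: it must ACCEPT 1..65535 and REJECT
   everything else with -1. */
#include <stdio.h>
#include <stdlib.h>

static int parse_port(const char *s)
{
    char *end;
    long v = strtol(s, &end, 10);
    if (*s == '\0' || *end != '\0' || v < 1 || v > 65535)
        return -1; /* reject: empty, trailing junk, or out of range */
    return (int)v;
}

int main(void)
{
    /* Positive test: good input must succeed. */
    if (parse_port("8080") != 8080) { puts("FAIL: valid port rejected"); return 1; }

    /* Negative tests: bad input must FAIL. These probe robustness. */
    const char *bad[] = { "", "0", "65536", "80x", "-1" };
    for (int i = 0; i < 5; i++)
        if (parse_port(bad[i]) != -1) {
            printf("FAIL: accepted bad input \"%s\"\n", bad[i]);
            return 1;
        }

    puts("PASS");
    return 0;
}
```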
-
I think of software design as a parallel to architecture: a discipline that has merged with art a bit, but is heavy on engineering. There are objectively "good" buildings and "bad" buildings, and over the centuries we have come to understand that poorly designed buildings fall down and kill people, a lot of them.
Software today is divided into life-critical and non-life-critical applications. I have worked on both (medical applications among them). The problem is that there is not enough recognition that software projects fall down too. Our complexity is simply out of control, and many projects end when the software has too many bugs and not enough understanding. Programmers move on; the code was not that well understood to begin with. Most software isn't designed to be read. Printed out, it's only useful in the toilet, which dovetails nicely with today's idea that software should not be printed. In the old days (the 1960s era), it was common to keep programs in printed form, usually annotated by the keeper. If I dare to suggest that a given bit of code is ugly, I am told that nobody is ever going to look at it, and that it is going to be discarded shortly in any case.
If we are engineers, we are a funny sort. Electronic engineers don't produce schematics that are messes of spaghetti without much (or any) annotation. The same goes for mechanical engineers, or (say) architects. I'd like to say that software is a new science and that we are going to evolve out of this phase, but I don't think I will see it in my lifetime.
-
I call what is discussed here "traverse refactoring": rearranging the code, or adding routines to it, to support an upcoming feature without completely implementing that feature. I break this down into two types:
1. Refactoring the code to make it easier to support an upcoming feature.
2. Adding routines/classes needed by the new feature.
Neither of these changes breaks the code, and thus both can be committed to mainline without affecting it. Perhaps equally important, these improvements can be removed if they don't work out.
The reason I call this traverse refactoring is from mountain climbing. If you are climbing up the face of a rock and realize it is going to be too difficult, you move sideways or "traverse" the rock face to find a position where continuing upwards is easier.
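A hypothetical sketch of both types, assuming a CSV export feature is on the way (every name here is invented):

```c
/* Traverse refactoring sketch: stage work for a coming CSV feature
   without changing mainline behavior. All names are invented. */
#include <stdio.h>

struct row { const char *name; int qty; };

/* Type 1: formatting extracted from the report printer into its own
   routine, creating a seam the upcoming feature can reuse. Current
   behavior is unchanged. */
static void format_row_text(char *buf, size_t len, const struct row *r)
{
    snprintf(buf, len, "%-10s %5d", r->name, r->qty);
}

/* Type 2: a routine the upcoming feature will need. Nothing calls it
   yet (the compiler may warn it is unused, which is exactly the point),
   so mainline is unaffected, and it can be deleted cleanly if the
   feature is dropped. */
static void format_row_csv(char *buf, size_t len, const struct row *r)
{
    snprintf(buf, len, "%s,%d", r->name, r->qty);
}

int main(void)
{
    struct row r = { "widgets", 42 };
    char buf[64];
    format_row_text(buf, sizeof buf, &r); /* existing behavior only */
    puts(buf);
    return 0;
}
```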
A few comments on other items described in the video:
Feature branching is an invitation to rebase hell. I have never been on a project with a significant number of people (more than 3) where the branches were not falling behind mainline rapidly. This means rebasing, either frequently or all bunched up just before the merge.
Having "flags in the code"... this makes my head hurt. Two flags means 4 combinations. Three means 8, 4 means 16 combinations, etc. IE., you rapidly lose control of the codebase. Further, most of the code in a feature does not affect other code, meaning that you are only including it in tests (and in compiles if you are #ifdefing!) if the flag is on. Yes this method is common. No I am not a fan.
-
@Weaseldog2001 Well, yes and no. Now we are into test theory. If a test problem needs to be fixed by better cleanup at the end of the test, does that not imply that the initialization of the next test is the problem? It is clearly not able to bring the system to a stable state before the test.
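A sketch of the distinction, with an invented dut structure standing in for the test unit:

```c
/* Initialization, not the previous test's teardown, should be
   responsible for reaching a known state. The dut structure and its
   fields are invented names. */
#include <stdio.h>

struct dut { int mode; int queue_depth; }; /* stand-in for a test unit */

/* Fragile: silently assumes the previous test's teardown left things
   clean. */
static void init_fragile(struct dut *d) { (void)d; }

/* Robust: forces every piece of state to a known value, so it does not
   matter what the previous test did, or whether it ran at all. */
static void init_robust(struct dut *d)
{
    d->mode = 0;
    d->queue_depth = 0;
}

int main(void)
{
    struct dut d = { 3, 7 }; /* leftover state from a prior test */
    init_fragile(&d);        /* would start the next test in mode 3! */
    init_robust(&d);         /* starts from a stable, known state */
    printf("mode=%d depth=%d\n", d.mode, d.queue_depth);
    return 0;
}
```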
We had a test unit "farm" at Arista. The idea was that there was a large pool of hardware test units, and the software could take a test run request, grab an available unit, run the tests, and release it again. The biggest issue with it was that machines regularly went "offline", meaning they were stuck in an indeterminate state and could no longer be used, even after cycling power for the unit and rebooting. The problem was solved by taking some pretty heroic steps to restart the machine, as I recall even rewriting the firmware for it.
-
To me the prime example is a compiler, one of which I happen to be working on at the moment. If your compiler does not work, you are seriously scr*wed. Everyone knows this. Thus compilers simply work, and are reliable most of the time. If they are not, they die quickly, since nobody will use them.
Thus the question becomes: compilers are a large and complex codebase, so if we can get those right, why can't other programs be proven correct as well? The answer is that compiler developers simply take it as a given that the test code for the compiler will be 50% of the total work, as much effort as developing the main compiler code itself.
So does this mean that for most programs, it's not worth spending that kind of effort to prove the program correct? Therein lies the paradox. A typical program takes 50% or more of its total development time in debugging; even very optimistic programmers will admit to that. By that same logic, saying you want to write the program and then do the work to debug it into shape means you prefer to fix the program AFTER the fact rather than BEFORE the fact, which is the net argument against TDD.
In a word, you can pay now, or pay later.
-
The issue I have with IDEs is that they are made too difficult to customize. I don't like templates or automatic formatting; it feels too much like I am fighting with another person for control of the editor. Yet in Eclipse, for example, it's a huge job to turn these features OFF, and there are some aspects of autoformatting that simply cannot be turned off at all (the subject of many a stackoverflow post). The other issue is that IDEs don't understand that you may be looking for an IDE as an alternative to an editor, since it is difficult to, well, just edit a file. Often the IDE requires that you register files in a project. Give me an IDE that I can edit a system file with (for example) and that is a generally useful tool. What's wrong with "ide file"? Programs that work in familiar ways as a base encourage use. Otherwise it is like "our IDE is so great, you have to take a course to use it". Finally, the major advantage of vi/vim is that you can use it ANYWHERE, including over an ssh connection, without the hassle of an X Windows connection. What's wrong with giving an IDE a text-only option that can be used the same way?
-
Companies have practices that you would think need to change if there were a real programmer shortage, like hiring checklist programmers, being unwilling to train, having age limits, etc., but these companies aren't stupid. Thus I think it is reasonable to assume the "programmer shortage" is overblown. The Wall St. Journal has run some good articles on why the "open programmer positions" figure is a mostly fictional statistic: positions advertised with no intention to fill them, positions that were already assigned to a green card worker but were required to be advertised by the conditions of H-1B visas, and so on. Any shortage should produce higher wages, and indeed, programmers are generally paid well. However, the feeling among employers is that most software jobs can be divided up and given to new hires who are cheaper than one or two highly experienced programmers.
I have been in the industry for 40 years and have lived through several "programmer shortage" waves. The biggest ones, like the one in the early 1990s, produced a wave of programming graduates who, in my own personal experience, mostly ended up taking jobs outside the industry. I don't think this has really changed. Here in Silicon Valley I take Ubers and have many times found out that the driver was a programmer who could not find a job.