Comments by "MrAbrazildo" (@MrAbrazildo) on "ThePrimeTime"
channel.
-
11:22, the "Dream Job" doesn't exist. If it's a 'dream', you'll earn a low wage for what you deliver. If it's a well-paying one, then it's only good for you, you maniac. All jobs suck hard. The only exception is when someone can do something that nobody else can. But even then, the dude earns thanks to a "monopoly" (of a talent), not because of all his technical skills. 22:49, indeed.
27:21, my code is spaghetti. Whether it's readable or not is a matter of taste. The computer likes it.
30:07, a Brazilian meme.
-
5:31, I don't know what's supposed to be so bad about C++, as these people tend to say. It has all the types you mentioned: classes that are not defined at compile time (interfaces), "simple structs with methods" (classes), foreach both as an algorithm and in the language core, optional types, and so on.
7:15, C++ has the optional type, for a value that may or may not be present. But there's a better solution: adding the GSL library gives you the not_null class, providing the equivalent of Zig's non-nullable pointer. It's also possible to develop your own pointer wrapper like that, and it doesn't take much longer.
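A minimal sketch of such a wrapper, assuming nothing beyond standard C++ (the name NotNullPtr is made up here; the real GSL class is gsl::not_null):

```cpp
#include <cassert>
#include <cstddef>

// Sketch of a gsl::not_null-style wrapper: construction from a null
// pointer is rejected, so every later dereference is known-safe.
template <typename T>
class NotNullPtr {
    T* p_;
public:
    explicit NotNullPtr(T* p) : p_(p) { assert(p_ != nullptr); }
    NotNullPtr(std::nullptr_t) = delete;   // literal nullptr: compile error
    T& operator*()  const { return *p_; }
    T* operator->() const { return p_; }
    T* get()        const { return p_; }
};
```

The check runs once at construction instead of at every use, which is the whole point of pushing the invariant into a type.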
7:49, so there's no equivalent of C++'s namespaces in those languages, huh? A namespace works like a surname for a library: each one has its own, so name conflicts never happen. It's also possible to spare yourself from typing it all the time, if you're sure it won't conflict.
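A tiny illustration of the "surname" idea (the library names here are invented):

```cpp
#include <cassert>

// Two libraries exporting the same function name: the namespace acts as
// a surname, so there is no conflict.
namespace lib_a { inline int version() { return 1; } }
namespace lib_b { inline int version() { return 2; } }

// 'using namespace' drops the prefix locally, once you are sure the
// names won't clash inside this scope.
inline int b_version() {
    using namespace lib_b;
    return version();   // resolves to lib_b::version
}
```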
8:51, copied from C++, which also has them as default parameters, meaning one doesn't need to pass them explicitly on initialization. 9:00, and if "you forget to clean things up", it'll do that for you, no messages needed. 10:05, it means one doesn't even need to do the deallocation.
11:20, people are contradictory: they love "error or value" for a variable, but at the same time they're afraid of "NULL or a pointer" for a pointer! Where's the logic?!
18:57, yes, undefined means that memory keeps whatever "trash values" were left there by previously freed variables (which no longer exist). C/C++ has this by default (faster initialization), so one doesn't need to waste time typing ' = undefined'.
23:26, actually it's much better, because a class is supposed to have several public functions, and it only asks you to type the 'public' keyword once!
23:43, 1) a function is not necessarily assigning something, so there's no need for the = operator. 2) a function doesn't know what the user will do with the variable (change it, for instance). That's why it doesn't tend to be 'const', although that's possible in C++, just not recommended. 3) Specifying the return type improves compilation time. In my experience, it's better/safer to declare it as deduced (fn, var, auto, depending on the language) during development, and switch it to its explicit type once finished.
27:08, C++ is smoother, sparing you from typing usize and deducing both sum and the return type as int, thanks to its default for integer literals.
-
3:30, abstractions are the best thing, but they can also turn against the dev. In C++ I take an FP approach by default, until some variable could cause too much damage if changed wrongly or from the wrong place. Then it goes into a class, to control everything about it. I start with free things, then tie down the critical ones - "decoupling" is not welcome in those cases. So my code has many more free functions than classes.
Complexity is not only inside one function. If it's certain that a bug is inside one function, then it's just a matter of (a short) time to solve it, no matter how complex that function is. It's like a lion trapped in a cage: just study it and tame the issue.
The nightmare happens when one needs to travel through the project's functions, searching for where it might have started. This is the main reason to write classes: to restrict who can change critical data.
Let's say someone is coding a football (soccer) game. It could have a class for the ball and one for players/actors. To coordinate when a goal is scored, and its consequences - changing variables in more than one class - I tend to have a class that ties those things together. It could be called a referee. So the public Referee::verify_and_change_if_goal would be the only function (or one of few) allowed to call the private Ball::goal_restart (to put the ball back in the middle of the field) and Player::goal_restart (to put the players into their half of the field, at certain locations, with some random variance around them, to seem more realistic and less robotic).
So that public Referee function can change the world from any point where its object appears. Bad design! Actually, no. The verifications would be made inside Referee (the lion in its cage), only changing variables in case of a goal. So it doesn't matter if it's called several times, even by mistake: the worst possible outcome is losing some performance; it won't ever bug the game. It doesn't even matter if the code grows to one billion LoC: those things will stay locked.
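A minimal sketch of that design, using the class and function names from the text (the coordinates are made-up placeholders; a real game would carry much more state):

```cpp
#include <cassert>

class Referee;   // forward declaration for the friend grants below

class Ball {
    friend class Referee;              // only Referee may restart the ball
    int x_ = 0;
    void goal_restart() { x_ = 50; }   // back to midfield (placeholder value)
public:
    int x() const { return x_; }
};

class Player {
    friend class Referee;
    int x_ = 10;
    void goal_restart() { x_ = 25; }   // back to own half (placeholder value)
public:
    int x() const { return x_; }
};

class Referee {
public:
    // The "lion in the cage": verification and state change live together,
    // so calling it twice by mistake cannot corrupt the game.
    void verify_and_change_if_goal(bool goal, Ball& b, Player& p) {
        if (!goal) return;             // no goal: change nothing
        b.goal_restart();
        p.goal_restart();
    }
};
```

Any code that tries to call Ball::goal_restart or Player::goal_restart directly gets a compile error, which is exactly the junior-proofing described below.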
But let's imagine the worst scenario: some internal error happened inside this chain of calls, and a junior dev decided to shortcut it, creating his own way to change variables after a goal:
1) He gets compile errors because, let's say, the main function that calls the public Referee function, and now calls his own, is not a 'friend' of those classes. The junior works around it:
2) He makes the main function a friend of all those classes, so that he can write his own path. On the next error, some senior will see the class definition and think: "Wait: why is main a friend?!". But let's make it more complex. Instead of that, the junior:
3) He pulls Ball::goal_restart and Player::goal_restart into the public area. A senior may think they were always public. This is awkward, because some error might happen by calling one function and not the other (i.e. Ball's but not Player's), since they are now decoupled. But this could be avoided if they had comments in their class declarations: DO NOT MAKE THIS PUBLIC!
4) The junior rebels: he makes everything in those classes public, deleting those comments. FP rules once again! The security system is now completely destroyed! Well, the senior devs should suspect everything is wrong: 'everything public' is the sum of all fears!
-
10:00, it's probably skill vs. number of people (averaged for up to 10). So those 20% are (0.3 / 0.05 = 6)x devs.
10:38, in chess this graph is not only precise, but it also applies to so many people that it's not absurd to take it as a "fortune teller" for someone who is about to learn chess.
11:53, I think it's about the same, but with the valley of despair removed. That phase, and what comes before it, probably happens at the very beginning. That's why it's missing here, since the chart intends to capture the views of professionals. So I think from junior up to somewhat above it, the perception of software quality is optimistic, versus the pessimism of a senior and beyond.
15:35, perfectly safe C is probably unbearable, due to UB everywhere. But C/C++ working code, passing good tests (edge cases only/mostly), is achievable fast. Plus, there are awesome tools nowadays, even for UB. So if development time matters, Rust won't be the best choice.
The problem is when a company abuses this fast-paced productivity of C++, treating automated tests, applying tools, or writing your own tools as a "waste of time". For those scenarios, Rust is probably better - not for the dev, but for the bosses: I mean Rust is the harness, and the boss is a stupid rushing horse.
23:13, C++03 had this, but split: names in the constructor, values here. That's better for dealing with legacy code, since it spares us from having to change the initializations.
-
31:02, I wondered if this is UB: '(pos) && (n = (pos)->next)'. Compilers are usually allowed to choose which parts of an expression to evaluate first. But && is an exception: it evaluates strictly left to right and short-circuits, so if pos == NULL the right side never runs, and 'n = NULL->next' can never happen. Not UB.
32:04, just answer this: which of those 2 lines would you prefer to write? The big for directly (uglier and more error-prone), or the short macro? 32:09, it's just the header of a for loop. There's no way to write that without a macro.
34:00, I stopped using goto like that, as a function "destructor", when I moved to C++.
34:21, in the same way one adds { } to a for loop for more than one statement, they can be added here too. So it's not limited to one statement.
35:15, it's more DRY.
37:30, I use CamelCase for constants. I don't know, upper case seems "frozen" to me.
-
19:05, still about this code: its return value is inverted, considering that 0 is false and anything else is true (converted to int as 1). This is at the core of C/C++. So this can (and will) cause bugs. I can imagine it: if (test()), hoping the test passed, when it actually got a NULL! And UB! Ok, a sanitizer would catch this fast. But let's not be bad programmers just because of that, shall we?
I know that, in ancient C, the 'main' function got this absurd convention for some reason. And someone could say this 'test' was made an "entry point", thus trying to follow main's convention. But first, C++ (at least) has EXIT_SUCCESS/FAILURE, to let us forget about this. Second, I'll assume this was just an inexcusable mistake.
So, how to fix it? It's not possible to simply swap those values, since bugs would start popping up. If I were alone in this project, I would just create a C enum, like:
enum test_ret { TEST_FAIL=0, TEST_PASS };
(The explicit 0 is pure defensiveness: I don't trust enum defaults.) The important thing is to tie the failure to 0 (false).
This would be enough, since I respect global constants. Not just because it's a C++ Core Guidelines rule, but also from personal experience. People underestimate the danger of literal numbers.
However, working in a team, there'd be people writing things like if (test() == 0), and the enum would be implicitly converted to int, generating bugs, unless somebody hunted those calls down and changed them by hand. That's what I would do, after adding the enum.
If there were too many - risking the team writing more of them than I could fix - I would change the enum to an 'enum class'. That cancels the implicit conversions to int, causing compile errors. So people would be forced to look at the enum class declaration and its global constants - any IDE would jump to its file and location.
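The difference between the two enums, side by side (TestRet and run_test are illustrative names, not from the original code):

```cpp
#include <cassert>

// Plain enum: implicitly converts to int, so 'test() == 0' still compiles
// and silently keeps the inverted convention alive.
enum test_ret { TEST_FAIL = 0, TEST_PASS = 1 };

// enum class: no implicit conversion, so callers must name the constant
// and therefore see the declaration.
enum class TestRet { Fail = 0, Pass = 1 };

inline TestRet run_test(bool ok) { return ok ? TestRet::Pass : TestRet::Fail; }

// if (run_test(true) == 0)             // does NOT compile: no int conversion
// if (run_test(true) == TestRet::Pass) // the only way to compare
```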
Even so, there would be people just glancing at it, thinking "Ah, some idiot changed it to enum class, as if that'll make any difference". So if I started to see many casts to int, like if (0 == (int) test()), the issue would still not be solved.
Then a more drastic solution would be needed. I would change the 'int' return type of test to something not declared anywhere before:
CALLING_A_MEETING_TO_REASON_ABOUT_THE_STUPID_TEST_RETURNING_VALUES.
Compile errors popping up everywhere. The idea would be to stop the entire production line, making the new-feature-addicted boss freak out, risking my job. But it should be done before things get out of hand - some decision about not messing with working code. To get the boss hallucinating, I could even put the time in it: MEETING_AT_10_30. He would show up sweating, pointing a knife at me:
"Guess what? Nobody steals my job!"
"I don't give a crap about your sh##y job. I'm paid to defend the company's goals, which are above you. So I'll keep this up until I get done with this sh##ness, and quit to wash dishes, which is a better job, and thus pays more!"
-
26:00, I don't even remember the last time I had a serious issue with pointer indexing in C++. I wrote a class that inherits from a container and checks the index. I can turn that on/off anytime, for the entire project, by changing just one line!
It's also hard to get in trouble with pointer dereferencing, because everything is 'begin() + K, end() - N' - the valid range of containers. There's no room here for pointer issues. And when I pass a nullable pointer to a function, it tends to be null by default, which means I want to give the user the choice of sending it or not. In that context, it's obvious that I'll check the pointer soon after. If it's a 'const char *' (for a string literal), and I want to gain some performance by not checking it, I set its default to "". So I don't have pointer issues at all!
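A sketch of that one-line-switch idea (the class name and macro are made up; the original code wasn't shown):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One-line switch for the whole project:
// 1 = bounds-checked operator[], 0 = plain std::vector speed.
#define CHECK_INDEXING 1

template <typename T>
struct CheckedVector : std::vector<T> {
    using std::vector<T>::vector;   // inherit all constructors
    T& operator[](std::size_t i) {
#if CHECK_INDEXING
        assert(i < this->size() && "index out of range");
#endif
        return std::vector<T>::operator[](i);
    }
};
```

Flipping CHECK_INDEXING to 0 (or building with NDEBUG) removes the check everywhere at once, with no call sites touched.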
28:39, he meant that failing those checks forced him to give up on pointers, even though they're much faster - indexing always has to walk all the way from the beginning. And falling into UB is not an option.
-
11:34, well, if in C one writes a struct with data and pointers to functions inside, I believe it'd have the same or better performance than C++'s. But there would be several disadvantages - enough, I think, that it wouldn't deserve to be called a class:
1) Everything is public. This is a disaster for complex projects. That alone is a crushing factor; one should not use it.
2) The pointers could change, pointing to another function with the same signature.
3) If a class can't keep its internals non-public, it defeats the very purpose of a class: code security.
4) No inheritance, making things riskier by forcing composition.
5) No constructors, leaving the programmer to deal with dangerous C initializations.
6) No destructors. Although nowadays C has a kind of "general destructor" for any kind of variable, it's up to the coder to wire it up right when the object is created. It's not automatic like a constructor.
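A tiny sketch of such a C-style "class" (the names are hypothetical), which makes drawbacks 1, 2 and 5 concrete: everything is visible, the function pointer can be reseated, and nothing initializes the struct for you.

```cpp
#include <cassert>

// C-style "class": data plus a function pointer in a struct.
struct Counter {
    int value;                        // public, whether you like it or not
    void (*inc)(struct Counter*);     // reseatable: anyone can swap it
};

inline void counter_inc(Counter* c) { ++c->value; }

// No constructor: the caller must remember to wire this up by hand.
inline Counter counter_make() { return Counter{0, counter_inc}; }
```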
-
13:30, I use Code::Blocks: it's still an IDE, with plenty of modern features and keyboard shortcuts, while being middle-weight. I don't find it slow.
15:00, I think you can say C++ (15:13) has hostile behaviour, mostly due to some possible undefined behaviour, but not a hostile design. It's pretty good at hiding complexity - better than any other language, I would dare to say. It also has a compact syntax, which leads to more productivity. 19:27, speaking of that, I find C++ good even for simple/short projects. It doesn't force you to use any unnecessarily complex tool.
-
10:10, the Clang compiler has tooling (UBSan) to "trap" overflow and UB. So it'd halt this app. Case closed.
11:33, it's UB because "it doesn't make sense to do that". If one wants to zero the variable, just assign that value. Or if one wants to rotate - which is a slow operation - use a third-party library for that, or write your own. C doesn't want to waste precious time checking it, and I think the language is right about that. This could be avoided by the discipline of calling 'assert' before any such operation. C++ offers an even more interesting option: wrap the variable in a class, making the assert calls implicit/automatic - meaning they run even when you forget them! Remember that, even in C, those assert calls vanish by just #define-ing NDEBUG. So, in the release version, they won't cost performance.
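A sketch of that wrapper idea, assuming a 32-bit unsigned value (SafeU32 is an invented name): the assert fires automatically on every shift, and compiling with NDEBUG makes it free.

```cpp
#include <cassert>

// Wrap the integer so every shift is range-checked automatically,
// even when the caller forgets. NDEBUG removes the check in release.
class SafeU32 {
    unsigned v_;
public:
    explicit SafeU32(unsigned v) : v_(v) {}
    SafeU32 operator<<(unsigned n) const {
        assert(n < 32 && "shift amount would be undefined behaviour");
        return SafeU32(v_ << n);
    }
    unsigned value() const { return v_; }
};
```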
16:14, hmm... delicious. Bookmarked to read later.
18:47, in C++ one can make a class holding a pointer, which automatically checks it and forbids it from being nullptr. It has even already shipped, in the GSL library, to reinforce the C++ Core Guidelines.
23:00, he's wrong because, although ptr_byte is "walking over the pointed-to memory", it's always taking the lowest byte of value. memset is not forcing him to use the lowest byte: he chose that instead of the entire value. The result was effectively: memset(ptr, (unsigned char) value, num);
28:50, this may be C-only. I never saw it in C++.
-
@mikehodges841 TDD can be good for small/tiny functions. Even so, the dev is often unsure about the function's signature for a while. So I think a "hybrid" TDD is better: code the small function until you discover its signature; once you have that, write the tests, and then complete the function - which now goes much faster, thanks to "the blessing" of the tests. It means it can be completed with less thinking, saving energy for the long run.
However, in my experience, big functions (2-5 screens) doing lots of things, either directly or by calling others, are hard to predict in a TDD way. And they have this editing issue too.
Good thing C/C++ have a fairly tight syntax, making each test fit in only one line. So it's easy to turn some of them on/off when broken. Via macros, they may not even be compiled.
-
2:09, one of my biggest bugs in life came when I thought: "3 lines for these 4 ternaries... I guess I'll squeeze this into 2, elegantly". I reviewed it in my head and... approved! Those lines held about 8 possible combinations. It happened that 1 or 2 of them were wrong, well disguised among the others - and those combinations seldom occurred at runtime.
To make things worse, there were 2 other parts of the code that looked more suspicious, so I spent a while looking closely at them. Automated tests would have caught it easily.
3:40, I guess there was code before and after that break. The problem is that in C/C++ 'break' jumps out of switch, for, while and do-while blocks, but has no such power over if/else ones, as the coder unconsciously assumed at that specific moment. So the break applied to the first eligible block above those ifs - the switch - jumping over the processing of the incoming message.
I once had a bug from this. I never wrote a tool for it, since it never became a recurring one.
For this AT&T case there were some options to replace the else block, trying not to duplicate the code it should jump to:
- Make it a function. Bad design, since the rest of the project would see it and might call it by accident. So boilerplate would have to be added to remediate this.
- Make it a function-like macro. Although I don't usually have problems with macros, I agree it would be noisy/dirty code, depending on its size.
- Use a label after the END IF, reached via goto. Better, but this goto could still be used from anywhere in that case, at least.
- A lambda. I think this is the best one: a break inside it would be a compile error, and a return from any place would exit at the right spot. However, this was C, and not even C++ had lambdas at that time.
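A sketch of the lambda option (handle and the message values are invented, since the AT&T code isn't shown): an immediately invoked lambda inside the case makes 'return' land exactly after the block, and a stray 'break' inside it - outside any loop - would not even compile.

```cpp
#include <cassert>

// Inside a switch case, wrap the message handling in an immediately
// invoked lambda: 'return' exits the lambda, never the switch, and a
// stray 'break' in it (with no enclosing loop) is a compile error
// instead of silently skipping the processing.
inline int handle(int msg) {
    int processed = 0;
    switch (msg) {
    case 1:
        [&] {
            if (msg < 0) return;     // early exit, no goto needed
            processed = msg * 10;
        }();                         // execution continues here either way
        break;
    default:
        break;
    }
    return processed;
}
```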
-
15:22, since classes are about protecting data, I design them towards that. So there's no "obscure contract": the base class holds some data, and maybe some functions, if they are technically quite tied to it. It's not a contract; it's an independent, ready-to-work class. A derived class should extend its data, or else it should be reduced to just independent functions, in most cases. This also brings the advantage of passing just the base class to functions that only need it, allowing a possible performance gain by copying it (if small) instead of referencing it.
16:30, the best way to do this in C++ is to avoid the abstract base class: put its methods in a macro, using composition in the derived ones. The compiler will then build the classes as concrete ones, not interfaces, gaining a lot of performance (no virtual dispatch).
26:10, this is pretty awkward. It should be just one class, not an interface by default. And if many functions should not be allowed to change users, almost everything here should be made non-public, allowing only a few functions to change it.
-
7:25, I once made a nasty bug by refactoring 4 lines / 2 statements into 1 line / 1 statement. They were all built from ternary operators, nested or not. I mentally checked "every" possible case. It ended up being correct in 6 of 8 cases, as far as I remember. It was hard to catch, because 1) it was hard to make it appear (seldom seen, but never gone); 2) it appeared after tons of things had happened (reproducing its scenario could produce a false positive regarding its source); 3) I had a false lead/clue, and it took a while to realize that. Hard to test, hard to trigger, and full of false clues.
Automated tests would have caught it right at birth. But there's a question that won't stay silent: if automated tests are necessary for every bit of refactoring, would they, in the end, take more time than catching the bug when it finally happens?
-
0:44, wait a minute, is there no error about it? Is it just a matter of "being stupid or not"?
10:27, I'm a 2, because:
1) If I delete a '//' C/C++ line comment, I have to decide about indentation right away. Whereas with 3, after putting a // (leaving a blank space before the statement) and later deleting it, what happens to those "3 spaces"? Do they turn into a tab, or stay 3 spaces? Does it vary between code editors?
2) Since I tend to write pretty horizontal code, a tab of 2 saves me more room for comments on the right side.
I believe tabs are better than spaces. However, if everyone on the team has the freedom to choose their own tab config, spaces avoid a mess.
-
7:14, the only "unsafe design" about those is that, when the vector changes its size, it's not guaranteed to stay at the same location in memory, so the iterators keep pointing to the old address. One just needs to "refresh" them. But this need exists only once per allocation (size change). It's not done automatically for performance reasons. It's like web pages that keep refreshing themselves vs. those waiting for the user to do it: the first is more comfortable, but wastes the machine's performance/resources.
The operator [] doesn't have this issue, because it comes all the way from the beginning, but it has a performance penalty. I personally use iterators intensively. I've only hit this issue once.
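A minimal demonstration of the invalidation-and-refresh cycle (the saved index, not the iterator, is what survives a reallocation):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// After a reallocation the old iterator may dangle; "refreshing" it from
// a saved index makes it valid again. One refresh per size change.
inline int demo() {
    std::vector<int> v{1, 2, 3};
    std::size_t idx = 1;                 // remember the position, not the address
    auto it = v.begin() + idx;
    v.reserve(v.capacity() * 2 + 8);     // may move the buffer: 'it' is now dead
    it = v.begin() + idx;                // refresh: valid again
    return *it;
}
```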
8:55, agreed. This is awkward because, for every one of the millions of functions, the code will have this amount of lines. The way I tend to handle it is to write a macro only once, calling it everywhere:
#if NEWC
#define arrpass(type, name, dim) type name[..]
#elif C99
#define arrpass(type, name, dim) type name[dim]
#else
#define arrpass(type, name, dim) type *name
#endif
Then functions can be written like this (regardless of the standard):
extern void foo (const size_t dim, arrpass (char, a, dim));
-
5:57, performance tends to work the opposite way: the more promiscuity between types, the better. And C specializes in that.
6:46, if the C compiler doesn't receive optimization flags (or gets -O0), it will pretty much execute line after line. But all those flags manage execution at better moments.
8:50, as far as I've heard, this purity means its functions don't have side effects. But that's an ideal world. In practice, you'll have to push the side effects out of functions (to keep them "pure"), which loses even the precarious encapsulation FP offers. It seems to me it only takes a more complex project to see this fall apart badly, compared to usual FP, in the same way FP does compared to OO. No wonder Haskell has the motto: "Avoid success at all costs!".
12:00, that's why C++ was created, and it seems to me to be the best language for crafting tools. I don't know other languages in depth, but I saw a presentation showing C++ as "the language with the most functionality" (compared to D, Rust, Java/C#). So one can go much further than linked lists - and in a safe way, if one crafts his tools properly.
20:13, in C the volatile qualifier was (mis)used to deal with that. But since C++11 the standard library has <atomic> and <thread>, which make it possible to write code without data races. It's not as easy to use as in higher-level languages, but it's a big improvement over the ancient C volatile approach.
-
2:53, this is so easily solved by OO...
5:50, you should have gone to C++ instead. You'd get a shorter/cleaner syntax and a faster language, compared to awful Java.
For C, this can be solved by just creating a struct that carries its length:
struct ArrayWithLength {
    int thearray[ARRAY_SIZE];
    enum { size = ARRAY_SIZE };
};
But the company would still have to write alternative functions for the whole standard library, to check the array size automatically at each call. I'd even recommend writing a tool that forbids programmers from calling the unsafe libraries directly, by statically checking the code.
All of this is solved at once by just switching to C++. Its std::array has begin/end functions, giving iterators to its limits, keeping the same syntax as any other container throughout the entire standard library.
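For instance, std::array carries its size in the type and plugs into the same iterator-based algorithms as every other container:

```cpp
#include <array>
#include <cassert>
#include <numeric>

// std::array keeps its size as part of the type and offers the same
// begin()/end() interface as every other standard container, so no
// separate length parameter is ever needed.
inline int sum_fixed() {
    std::array<int, 4> a{1, 2, 3, 4};
    return std::accumulate(a.begin(), a.end(), 0);
}
```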
6:45, right from its first standard, C++ had a fully safe, modern, easy-to-use string class. It doesn't have the \0 terminator problem, it keeps the size internally, it's compatible with all C libraries and, since C string literals carry an implicit \0, with std::string the user can forget about the terminator, even when growing the string.
-
2:45, I once read a book by someone who used to code like that. It is pretty legible indeed. However, I don't code like that, because too much time and energy is wasted travelling vertically through the code. And there's also the worry about de-encapsulating things that, if not held by a class or something alike, can then be called by the rest of the project, raising the chances of bugs.
I prefer functions that fit in one screen. They may be bigger, if there are more things that make no sense to be seen by the rest of the code.
6:07, I guess for small/tiny functions, it's easy to know which tests one wants. And even if the code is from someone else and you don't understand it, if it has tests, it's possible to refactor it, even quickly.
9:20, from my experience with timed work, I can say there are often tiny interruptions. And if the programmer stops the clock at each of them, the resulting time is almost double. Examples: a) 2h -> ~3:30; b) 4h -> ~7:20.
It's not because I took too long to get back. It's simply a matter of too many interruptions: a glimpse of an idea you don't want to miss, someone talking to you, an uncertainty about the work, some stress, some hunger or thirst, a joke you remember, and so on. It's inevitable. I guess that if someone makes a real effort to eliminate all of this, his stress will skyrocket. 10:12, I agree with you here, but it's not what happens, as I explained.
-
The author showed good skills regarding clean and DRY code, automated style (including safety checks) and knowledge of C technicalities. But to me, what really distinguishes a senior or ace worker is the concern for safety. He didn't mention this, at least not in words.
19:05, for instance, in this code I would point out a few things:
- First, I would "rewrite everything in Rust"... Nah, it'd be in C++, which is a genuine improvement over C, not leaving anything behind. If the boss didn't agree: https://www.youtube.com/watch?v=O5Kqjvcvr7M&t=22s
- I would question whether a linked list is the best choice. It's only faster when there are many insertions in the middle - which implies a sorted list, somehow. Otherwise, a std::vector-like container is much, much faster. For instance, if it's just a database, this sorted linked list would be slower than an unsorted vector-like one, adding at the end and removing from the middle by replacing the element with the last one. Or am I wrong?
- I would study changing that raw boss pointer to a unique_ptr or something higher-level: more elegant and safer.
- I would change that person::name from a C array to std::string: more comfortable to work with, and with almost no chance of UB, leading to cleaner code, since it'd require far fewer if checks by the user. But the main advantage is that std::string is not a primitive type (it's a class). So it's possible to later replace it with a user-defined, faster container, keeping the same syntax towards the outside - no tons of time refactoring throughout the entire project. This would not be achievable with a C array, unless all its uses/calls were made via a function or macro - which nobody tends to do for it.
And I would only worry about that if std::string were a bottleneck, which is unlikely. But ok, let's imagine the worst scenario: it needs to be replaced by a fixed-size array, which tends to be around 20% faster, on the heap only. Since that is not as flexible as a std::string, does it mean breaking its syntax and refactoring? Actually no, there's a workaround: a tiny user-defined class inheriting std::array (same speed as a C array), implementing by hand the std::string-specific functionality, like += for concatenation. So all the work would stay inside the class.
If a bigger name were assigned to 'name', an internal check would run, as Prime pointed out. But not via assert, which would abort the app - this could be one of those always-on apps. Just an if, truncating the name to the size and writing an error to std::cerr.
But a fixed-size array probably couldn't be used anyway: there's a limit of stack memory per app. Since this code is making allocations, it suggests a huge number of persons, so it'd get a seg fault. std::string would indeed be the best choice.
-
0:10, I've heard of some bizarre things other languages do with inheritance. But C++ treats it nicely. I never got issues with this feature. For instance, that "diamond problem" doesn't exist (at least not by default).
2:50, I came from a C background - I even did my final college project in C. OO was a relief: safer, easier, higher-level, better for crafting tools (key to improving the language) and also performant, if the programmer knows what he's doing.
4:07, yeah, you're right: a simple variable can fit that description, since it's indeed a "shared state of many previous operations". But the key difference remains the encapsulation provided by OO - at least in C++. FP can only offer global variables with a filter (a setter). In C++ you can control who can even call those setters. This is a huge win! 4:17, but if it can't control who can call those functions/data, its OO doesn't mean much.
12:53, interfaces are bad: they start slow and eventually end up as bad design. See this talk: https://www.youtube.com/watch?v=aKLntZcp27M&t=720s
-
1:54, yeah, this is a really good strategy. The only issue is when working with a team: others might not share your concern for safety, preferring to rely on their intuition. So it's a good idea to shrink the verifying code. 4:00, if possible, an interesting way of doing this would be to pack the errors as flags in a bitmask: a unique bit for each of them. When any of them happens, it'd be recorded by setting the Nth bit. So the asserts would always look the same, something like: assert_stats(blablabla).
The function would check all errors at once, since the bits work in parallel, taking less than a machine cycle. Like (in C):
if ((Mask & (DUMMY_CLIENT | EXPECTED_CONNECTION | COULD_NOT /*blabla*/)) == 0) return true; // All ok. Otherwise, a switch-like block below, for error messages. (Note the outer parentheses: '==' binds tighter than '&' in C/C++.)
Only if an issue matches would it do a detailed switch-like pass, with error messages.
So it'd "feel easy" for anyone to just call the same thing, not dealing with different error messages at each call, for instance. C macros could even hide this at the functions' beginning: instead of {, write BEGIN, which would emit { and call assert_stats behind the scenes. Later it could be just as easily disabled, by simply editing the macro definition.
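A compilable sketch of the bitmask check (the flag names follow the text; TIMED_OUT stands in for the elided third flag). It also shows the precedence pitfall fixed: the & must be parenthesized, because == binds tighter.

```cpp
#include <cassert>

// One bit per error: all of them are tested in a single AND.
enum StatErr : unsigned {
    DUMMY_CLIENT        = 1u << 0,
    EXPECTED_CONNECTION = 1u << 1,
    TIMED_OUT           = 1u << 2,   // hypothetical third flag
};

inline bool stats_ok(unsigned mask) {
    // Note the parentheses around the &: '==' binds tighter than '&'.
    return (mask & (DUMMY_CLIENT | EXPECTED_CONNECTION | TIMED_OUT)) == 0;
}
```

Only when stats_ok returns false would the detailed, switch-like message pass run.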
-
7:07, I think C++ fits this article even better than C#. However, it requires the user to develop "some instincts", if he chooses to use the default behaviour/resources instead of developing his own defensive tools. For instance, my first thoughts after certain actions are:
- Check the result of a function or algorithm right away.
- If what matters is the index where a pointer stopped after an algorithm, I get rid of that pointer at once.
- For things I'm used to, I write as fast as I can (favouring productivity). When something starts to be unique, I proportionally get slower and more reflective about it (favouring defensiveness).
When things get complex, or I'm failing often (for whatever reason), I stop everything to write a tool that locks in the right behaviour forever, then go back to being fast/productive. So things go inside functions or classes (yes, including setters as non-public), with only one or a few ways to reach them, and as many layers of protection as needed (thanks to C++'s high functionality) - whatever it takes to reach productivity once again, because I value doing things without thinking twice, in crazy-fast typing fashion.
So C++ fits this article's idea of effort proportional to complexity. An example:
I was picking values from a string, by pairs. I decided to use the default behaviours. I wrote it fast, and everything worked predictably. No need for fancy tools or languages. Then I decided to optimize it, using a pointer that took 2 steps per cycle. It got expressively faster. It was also fast to develop, and it worked flawlessly. So I left the computer with my 15487th easy victory using C++.
But my gut told me I had written something I wasn't used to, too fast. So, calmly drinking a coffee, I reflected briefly on it. Mentally I discovered that the first step of the pointer was immediately checked, as I tend to do, but not the second. So it would step beyond the array boundary on the last one, in some cases, whenever the function didn't return before. Easy to check, easy to fix: I just added a one-line check for it.
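A hypothetical reconstruction of that bug and its one-line fix (the original code wasn't shown; sum_pairs and the summing are invented stand-ins for "picking values by pairs"):

```cpp
#include <cassert>
#include <string>

// Walk the string two characters per cycle. The second step needs its
// own bounds check, or the last iteration reads one past the end on
// odd-length input.
inline int sum_pairs(const std::string& s) {
    int sum = 0;
    const char* p = s.data();
    const char* const end = p + s.size();
    while (p != end) {
        sum += *p++;             // first step: guarded by the loop condition
        if (p == end) break;     // the one-line fix: guard the second step
        sum += *p++;
    }
    return sum;
}
```

Without the middle line, an odd-length string would make the second `*p++` dereference `end` - exactly the off-by-one the gut feeling caught.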