Comments by "MrAbrazildo" (@MrAbrazildo) on "ThePrimeTime"
channel.
-
1:36, unit tests are important to me because:
- They test edge cases.
- They keep re-testing things you believe are already right, whenever you change code.
- In TDD, they help plan the design. This can be controversial when one doesn't yet know what the f()'s signature will be, and it's better to discover it by developing it.
- Most important: once you have an incomplete f() and a bunch of tests, the rest of the development can become really fast!
1:55, it's possible to extract that if condition from inside the f(), making another one. The problem is that the new f() may then be visible to the rest of the project. C/C++ has a way to avoid that, while still keeping this flexibility.
-
7:08, on old hardware, the engine's instructions/data didn't fit entirely in the cache. So, depending on how many instructions an action took, the CPU had to go to RAM, which tends to be ~100x slower (maybe less in a console). On modern hardware, all the instructions/data of an old game fit in the cache, which has much more memory than they require. However, RAM is still used even nowadays for multimedia stuff: images, video, audio, textures and anything bigger than 64 KB. The optimization for these large things aims to load part of the RAM into VRAM (GPU memory) at a moment the user doesn't care about, like a loading scene - e.g. God of War's Kratos squeezing through some rocks. Sometimes the same trick is used for loading from files into RAM too.
11:58, but he is doing it for modern hardware, isn't he? The video's goal is just to explain why Quake's algorithm is not meant for all cases.
13:00, the sad truth is that these pointer transformations are UB (undefined behaviour). That's why the guy commented it as "evil": he just wanted to get his job done, leaving the comment for the future masochist who would deal with the potential nasty bug. UB means the operation is not standardized. So the app may someday start crashing or giving wrong values (out of nowhere!) if anything changes from the original setup: hardware, OS, any imaginable protocol that interacts with the game. Not even old C had a defined behaviour for that, as far as I've heard.
13:52, in math, a negative exponent means the number is divided. So x*0.5 == x/2 == x*2^(-1). Instead of multiplying the whole number, it's possible to change its exponent by an addition or subtraction, which are faster operations.
-
4:00, to make a projectile travel along a 2D plane, no angle is required. All that's needed is the "derivative": how much Y (vertical) it moves for each step of X (horizontal). Once you have the target, you take its x,y and compare it to the origin (your character's) x,y: (Ytarget - Yorigin) / (Xtarget - Xorigin).
6:47, I would make it this 1st way, as a bigger map. The spawns would be coordinates relative to the screen, so there would be no problem with number overflow. "Node" is a dangerous word: it always makes me feel the programmer will use linked lists (slow as hell) where he shouldn't.
10:18, pitfall here: some people think the 100 will take part in the division, when it'll just multiply the (already truncated) result.
10:23, a bit harder? Dude, I played Van Helsing, and it was a lot harder. The harder the better. Don't be afraid to do it.
13:54, because it's hard to imagine a harder task.
-
3:18, you (23:54) and Casey said something I already knew: wait for the need before abstracting. This is the right way to design: focused on the practical needs. And that includes building abstractions for defensive purposes too - before somebody starts to advocate FP.
3:35, one of the basic optimizations made by the machine is inlining f()s. The only concern is a limit, beyond which it won't inline or starts doing extra calculations about it - but one can set that limit. The actual problem with helper f()s is the chance of calling them by mistake, or feeling the need to memorize them as options when thinking about the whole thing. To avoid this extra stress, a class can hide them, but that requires more boilerplate. The best solution is writing lambdas (inside the f(), of course), if those blocks are called more than once. Otherwise, I keep them inside the f(), as explicit code, if they only make sense there.
5:03, if 2 apps differ by milliseconds on each action, people will feel better with the faster one. So it's not just for speed-critical projects: even common ones can benefit. Meta said that "people feel more engaged when the app is faster" (probably about ms and above).
-
0:03, I'll translate this: they don't usually spend time building tools. If default Rust has everything they need, ok, go for it. But don't complain later that:
- Windows 10 is 40% slower than Linux Mint 19.2, according to my tests.
- You lose too much time compiling.
- You missed the chance to have easy-to-use tools conciliating speed and safety.
2:16, this just means Rust has a better default behavior for safety. As far as I've heard, Rust lacks the flexibility of C++. Thus it ends up being an inferior approach compared to C++ coded to work exactly how the user wants.
3:41, C++ does it better: it doesn't waste speed tracking the object, and it also deletes it for you, automatically - while still allowing you to delete it by hand, if you want.
4:10, this is the problem: converting C to Rust may look like a big win, but it misses the opportunity to convert it to C++. For instance, that means they won't ever have a switch for "verify my code now" / "now don't verify it" (for compiling speed).
-
0:42, if I were taking care of this, I would start by rewriting this file to something much smaller. After all, 12 GB is too much. Let's say names average 8 letters and the numbers go up to 99.9. So a line is 8 + 1 (';') + 2 (2 digits) + 1 ('.') + 1 (1 digit) + 1 (newline char) = 14 bytes = 112 bits. Writing to a binary file instead: 4 bits for the fraction digit, 7 for the integer part and ~11 for a number symbolizing the name, to be looked up later in a separate table. So 4 + 7 + 11 = 22 bits per record. Since 22 bits is ~20% of 112 bits, those 12 GB would shrink to roughly 12 x 0.2 ~= 2.4 GB.
This would make any kind of search a lot faster.
-
0:00, interesting that you mentioned a startup. C++ is my favorite by far, but I don't know if I would trust it in the hands of a junior team. Maybe if there were a long deadline ("done when it's done" mode), or if I were watching closely what they do (almost "pair programming").
1:19, I often use inheritance and have no issues with it. I can barely imagine its absence.
1:40, the optional type is comfortable to code and understand, but it has a concerning drawback: it carries a potential branch inside, which may lead to slower code. Each f() doesn't know whether it's valid or not, leading to too many IFs: ugly code, redundant work, prone to slowness. It works like a 9-months-pregnant woman, making everyone apprehensive.
-
3:29, C++17 was a conservative and kind of small standard. C++20 changed the way we write code. It's pretty elegant, while still keeping the old C approaches.
3:42, I don't know about Rust, but C++ keeps its compatibility with the crude methods from C, so you can still solve things the dumb way. That alone defeats the alleged "complexity" issue. It's also utter nonsense to talk about a "stability issue" in C++. It keeps its performance and unmatched flexibility, with newer features cooperating with the old ones.
4:15, this is pretty stupid. OO improved safety a lot. Lambdas are used all the time, higher level and safer than anything C has - and zero cost too.
9:50, there are tools for C/C++ that catch those errors faster than Rust would compile.
13:08, C++ has the only right OO implementation and is the most flexible language so far. Unmatched on those. Aside from that, it's capable of everything C does, and implements higher-level features nearly as comfortably as the higher-level languages do. So it seems more like a "master of most things".
13:14, unwise.
-
1:53, when I code, I prioritize, in this order:
1) Safe. And thus Encapsulated (necessary for safety), Neat and Tidy (from an outside PoV), Noninvasive/Scalable (natural for OO), Systematic (few public things). It hurts testability.
2) Performant. This one tends to hurt everything else.
3) Readable. After some adaptations, it usually becomes Reusable/Understandable. If possible, with some deeper thought, it may also become (at least in its public interface) Simple, Elegant and maybe more Testable.
I don't know many languages, but I assure you C++ can achieve all of this. For instance, let's say a project works intensively with strings:
1) Safe: I use std::string or something alike. It's already pretty easy to use, but if I want something more automatic, I inherit from it in my own custom class.
2) Performant: let's say that's hurting performance. I don't drop down to C's array of char. Instead, I hide in my own class exactly what the project demands.
3) Readable: after those 2 stop fighting each other, I usually make some changes towards readability/elegance/simplicity. Right after, testability may be the target.
In the end it may not look as pretty as some Python-like solution, but given all the things it achieves at the same time, it's much better "for its purpose".
-
2:50, dump C++ for what? I want/need:
- Nonpolymorphic inheritance, instead of composition;
- Actual encapsulation, breakable only by a selected group of few f()s (meaning data is actually private, not indirectly public through exposed filters/modifiers, as in C);
- Full freedom to iterate over a container. That way I decide the level of usage constraints, by coding my own;
- Not to be forced to lose performance because somebody else decided it for me - GC, for instance. In C++ I code things where I can turn security on/off at a snap of the fingers. Does Rust have that? Can I turn its slow compile on/off anytime?
- Lots of things able to work hidden backstage (several anytime-optional checks, for instance), so that I can develop powerful tools.
-
9:20, it has int8_t up to the platform-sized types as the precise integers. But if precision isn't needed, and one even wants to make the app "future proof", a long will target the platform size, so it keeps being "upgraded" as the platform size grows. It can even become faster over time, if the app was designed to do binary operations on 1 variable, when that is faster. long long is then what it sounds like: twice the size, if allowed. short is for the smallest, or at least something smaller than the middle one, int.
12:23, cringe moment: I don't know what this May 9th is.
15:04, not all platforms allow this doubled size. So something like int128_t is meant to be a compile error if not supported. long long means "the maximum possible signed size". So, if not supported, it may be shrunk back to 64 bits, since the user didn't ask for exactly 128 bits.
-
1:07, by that do you mean a "data race" (more than one thread writing the same data at the same time)? This has been easily solved since C++11, with the standard <atomic> library, at compile time. The remaining issue is "false sharing": different threads changing different memory within the same cache line. When one writes to its portion, it "freezes" the entire cache line, not letting the other thread write during that brief moment. That's a performance issue, not a bug. It's still solved by hand: you have to align the data, leaving each thread its own cache line.
1:24, what exactly does Rust solve here? Those pointers are meant to acquire an opened resource and free it later, automatically. A common C++ skill issue here is using those pointers for data that could easily fit in the caches. Since people are used to calling 'new' in other languages, in C++ that puts the memory far away, in RAM or an even worse place, becoming at least 100x slower - unless the compiler saves the junior dev.
Why did C++ make life harder there? Because it actually made life easier: it assumes one wants the data in cache, so by default it spares us from even having to use 'new'.
1:55, I don't know about unique_ptr. But what I do know, and have seen more than once, is that the compiler is smart enough to keep an entire std::vector in cache. Assuming the unique_ptr is part of it, it's likely to be cached too. But of course, it depends on the memory it's holding: if it exceeds the cache sizes, it stays in RAM. I think there's nothing Rust can do about that.
17:12, I thought he would say that C's pointers are the same concept as Assembly's. Now I'm confused, since I haven't dealt with it for a long time. C++ iterators do some compile-time checks, while being pretty much the same speed.
-
12:49, I recently benchmarked unrolled loops vs normal ones in C++: the unrolled version got almost twice the speed of the conventional way. And thanks to macros, each unrolled algorithm took 2 lines (no f()s were even compiled), vs ~6-10-line f()s for regular loops. 12:55, the reason is that the unrolled code makes the chance for parallelism - via the special hardware for basic operations - clear to the compiler. See this talk: www.youtube.com/watch?v=o4-CwDo2zpg&t=2100
13:05, in C++ I usually write macros to do things in a safer way, and then I can cut them all out at once, by changing 1 line and recompiling. Could Rust do the same, activating/deactivating its safe mode, maybe using its macros?
-
@0ia I think app complexity is much more challenging than its size. A concept can challenge you if the project demands it right away, regardless of size. Take global variables: a known bad design choice, but if one only works on large projects that don't use them for decisions throughout the code (which branches it'll take) - like databases - the exposure of those variables won't be felt as the dangerous thing it is. But in a game, for instance, even a small one, which tends to have tons of branches taken according to the values of those variables, the programmer will be thrown into a truly unbearable mess.
-
42:58, you are right, but you are comparing to C. In C++, just use make_unique: it even spares you from writing that extra defer line (also avoiding the leak of forgetting to call it), because its destructor will free the memory when the time comes (the end of its scope).
43:54, wrong! Using C instead of C++ is. His use of a macro, to work around C's limitations, was well applied. I used that a lot in the past. C doesn't have anything defer-like, as far as I know.
-
1:38, I used to dislike #ifdefs. Nowadays I think they are quite nice, because they help debugging. For instance, if a block of code won't be used in some compilation, that code won't actually exist - even raising a compile error if some piece is missing. So this is already a check, a confrontation with what the coder is thinking. And it's possible to keep flipping the switches, getting quick statistical evidence about any bug.
The Code::Blocks IDE can "blur" blocks not targeted for compilation - a pretty nice visual effect.
4:19, I agree with you. People tend to think the Single Responsibility Principle means technically only 1 thing, but I think it may be semantic instead of technical. So an f() may do several small/tiny technical things to achieve 1 goal. That way, outside the f(), the rest of the project can look at it thinking about that goal only. It's already an isolated functionality, despite taking more actions internally.
4:31, I completely disagree here. I already wrote tons of words on a video of his, uploaded by another channel. If someone is interested, I may write some words here too.
6:28, sorry, dude, we are waiting for Carbon. If only it changes that bad syntax...
14:35, I think this is much more important than it looks. I can't prove it, but I feel like I spend more energy when traveling vertically. So it should be avoided whenever convenient.
18:02, I personally omit { }, because I love compact code. But I wouldn't argue against this standard. I would put them on the same line, though. 18:21, in C/C++ an if isn't broken by a comment, even without { }.
-
34:00, I'm not following the point here. What I've heard about this happens when 2 pointers may point to the same address. That's not the case for an int and a double. I haven't been following C, but speaking for C++, this is accepted, even with the same type. The issue is that C++ might lose performance:
void f(int *x, int *y) { // They may point to the same address.
    *y = 3; // This might indirectly be making *x == 3 at the same time too!
    *x = *x + 2;
}
It loses performance: each time one of these pointers appears, the compiler must re-read the pointed-to address, instead of working with the value in CPU registers!
Easy portable fix: if they are not const, copy their values to local variables, and bingo: ~12x faster!
Nonportable fix: compilers have different attributes for declaring that those parameters don't alias.
The Clang compiler can report when this performance is lost. One can go to Compiler Explorer (godbolt.org), put the f() there, select Clang with -OX, and there's an option (I don't remember where) which will say "variable clobbered by" something when it couldn't optimize.
However, I think this is an issue only when compiling part of the code. Otherwise, the compiler should be able to track all the pointer steps, knowing for sure whether 2 of them point to the same place.
-
1:50, those 2 are meant for big fat resources only. But there's a lot of memory accessing in which you can gain performance through pointers. Some days ago I was optimizing a small project and tried a bunch of algorithms for passing data from a string to an array. Using pointers (iterators, actually) was the fastest, making the whole app 10x faster than the slowest option, C's sscanf().
7:30, in C/C++ it's not undefined behaviour, as far as I know. It only loses performance. There's a tool in the Clang compiler that shows where/when the code loses it.
8:55, he said this pointer usage was an attempt to get around the borrow checker rules.
9:25, undefined behaviour is the sum of all fears in coding, because from that point on the program isn't guaranteed to obey the rules anymore. It means that, for instance, when you test your code it may not execute it; when you read an array, it may read past its end; and things like that. So your project will crash sometimes and work on other tries!
11:30, to take away an argument in C/C++ macros, you need to redefine it:
#define ptr__field (*ptr).field
But you'd need to do that for the entire class (all fields), multiplied by each different pointer name!
-
11:22, the "Dream Job" doesn't exist. If it's a 'dream', you'll have a low wage for what you deliver. If it's a "cashy" one, then it's only good for you, you crazy one. All jobs suck hard. The only exception is when one can do something nobody else can. But even so, the dude will earn due to the "monopoly" (of a talent), not because of all his technical skills. 22:49, indeed.
27:21, my code is spaghetti. Whether it's readable or not is a matter of taste. The computer likes it.
30:07, Brazilian meme.
-
5:31, I don't know what is so bad about C++, as these kinds of people like to say. It has all those types you mentioned: classes not defined at compile time (interfaces), "simple structs with methods" (classes), foreach both as an algorithm and in the language core, optional types, and so on.
7:15, C++ has the optional type, for a value that may or may not be valid. But it has a better solution, if one adds the GSL library: the not_null class, providing Zig's non-nullable pointer. It's also possible to develop your own pointer like that, and it doesn't take much longer.
7:49, so there's no C++-style namespace in those languages, huh? It works like a surname for a library. Each one has its own, so name conflicts never happen. It's also possible to spare yourself from typing it all the time, if you are sure it won't conflict.
8:51, copied from C++, which also has them as default parameters, meaning one doesn't need to explicitly pass them on initialization. 9:00, and if "you forget to clean things up", it'll do that for you, no messages needed. 10:05, it means one doesn't even need to do the deallocation.
11:20, people are contradictory: they love the "error or variable" for a variable, but at the same time they are afraid of "NULL or a pointer" for a pointer! What's the logic?!
18:57, yes, undefined means it'll initialize that memory with the "trash values" left in there by previously freed variables (that no longer exist). C/C++ have this by default (faster initialization), meaning one doesn't need to waste time typing ' = undefined'.
23:26, actually it's much better, because a class is supposed to have several public f()s, and this asks you to type the 'public' keyword only once!
23:43, 1) the f() isn't necessarily assigning something, so there's no need for the = operator. 2) The f() doesn't know what the user will do with the variable (changing it, for instance). That's why it doesn't tend to be 'const', although that's possible in C++, just not recommended. 3) Specifying the return type improves compilation time. In my experience, it's better/safer to declare it automatic (fn, var, auto, depending on the language) during development, and switch it to the explicit type once finished.
27:08, C++ is smoother, sparing you from typing usize, deducing sum and the return type as int, due to its default for integer literals.
-
3:30, abstractions are the best thing, but they can also turn against the dev. In C++ I take an FP approach by default, until some variable can cause too much damage if changed wrongly or from a wrong place. Then it goes into a class, to control everything about it. I start with free things first, then tie down the critical ones - "decoupling" is not welcome in those cases. So my code has many more free f()s than classes.
Complexity is not inside only 1 f(). If it's certain that a bug is inside 1 f(), it's just a matter of (short) time until it's solved, no matter how complex that f() is. It's like a lion trapped in a cage: just study it, and tame the issue.
The nightmare happens when one needs to travel throughout the project's f()s, searching for where it might have started. This is the main reason to write classes that restrict who can change critical data.
Let's say someone is coding a football (soccer) game. It could have classes for the ball and the players/actors. To coordinate when a goal is scored, and its consequences - changing variables in more than 1 class - I usually have a class that ties those things together. It could be called the referee. So the public Referee::verify_and_change_if_goal would be the only (or one of few) f()s allowed to call the private f()s Ball::goal_restart (to put the ball in the middle of the field) and Player::goal_restart (to put players into their half of the field, at certain locations, with some random variance around them, to seem more realistic, less robotic).
So that public Referee f() can change the world from any point where its object appears. Bad design! Actually, no. The verifications would be made inside Referee (the lion in the cage), only changing variables in case of a goal. So it doesn't matter if it's called several times, even by mistake: the worst possible outcome is losing some performance; it won't ever bug the game. It doesn't even matter if the code grows to 1 billion LoC: those things stay locked.
But let's imagine the worst scenario: some internal error happened inside this chain of calls, and a junior dev decided to shortcut it, creating his own way to change variables after the goal:
1) He gets compile errors because, let's say, the main f() that calls the public Referee f(), and is now calling his, is not a 'friend' of those classes. The junior works around it:
2) He makes the main f() a friend of all those classes, so he can write his own way. On the next error, some senior will see the class definition and think: "Wait: why is main a friend?!" But let's make it more complex. Instead of that, the junior:
3) pulled Ball::goal_restart and Player::goal_restart into the public area. A senior may think they were always public. This is awkward, because some error might happen by calling 1 f() and not the other (e.g. Ball's but not Player's), since they are now decoupled. But this could be avoided if they had comments on their class declarations: DO NOT MAKE THIS PUBLIC!
4) The junior rebels: he makes everything in the classes public, deleting those comments. FP rules once again! The security system is now completely destroyed! Well, the senior devs should suspect everything is wrong: 'everything public' is the sum of all fears!
-
10:00, it's probably skill vs number of people (averaged for up to 10). So those 20% are (0.3 / 0.05 = 6)x devs.
10:38, in chess, this graph is not only precise, but it also happens to so many people that it's not absurd to take it as a "fortune teller" about someone who is about to learn chess.
11:53, I think it's about the same, but with the valley of despair removed. That phase, and what comes before it, probably happens at the very beginning. That's why it's missing here, since the graph intends to capture professionals' views. So I think from junior to somewhat above, the perception of software quality is optimistic, vs the pessimism of a senior and beyond.
15:35, perfectly safe C is probably unbearable, due to UB everywhere. But C/C++ code that works and passes good tests (edge cases only/mostly) is achievable fast. Plus, there are awesome tools nowadays, even for UB. So, if development time matters, Rust won't be the best choice.
The problem is when a company abuses this fast-paced productivity of C++, treating automated tests, applying tools, or writing your own tools as a "waste of time". For those scenarios, Rust is probably better - not for the dev, but for the bosses: I mean, Rust is the harness, and the boss is a stupid rushing horse.
23:13, C++03 had this, but split: the names in the constructor, the values here. That's better for dealing with legacy code, since it spares us from having to change the initializations.
-
31:02, I wondered whether this is UB: '(pos) && (n = (pos)->next)'. Actually it isn't: && sequences its operands, so pos is evaluated first, and the right side is skipped entirely when pos == NULL. It's one of the few places where C/C++ do guarantee the evaluation order.
32:04, just answer this: which of those 2 lines would you prefer to write? The big for directly (uglier and more error-prone), or the short macro? 32:09, it's just the header of a for loop. There's no way to make that without a macro.
34:00, I stopped using goto like that, as an f() destructor, when I moved to C++.
34:21, the same way one adds { } to a for with more than 1 command, he can add them here too. So it's not limited to 1 command.
35:15, it's more D.R.Y.
37:30, I use CamelCase for constants. I don't know, upper case seems "frozen" to me.
-
19:05, still about this code: its return value is inverted, considering that 0 is false and anything else is true (converted to int as 1). That's in the core of C/C++. So this can (and will) cause bugs. I can imagine it: if (test()), hoping the test passed, when it actually got a NULL! And UB! Ok, a sanitizer would catch this fast. But let's not be bad programmers just because of that, shall we?
I know that, in ancient C, the 'main' f() got this absurd convention for some reason. And someone could say this 'test' was made an "entry point", thus trying to follow main's convention. But 1st, (at least) C++ has EXIT_SUCCESS/FAILURE, to let us forget about this. 2nd, I'll assume it was just an inexcusable mistake.
So, how to fix it? It's not possible to just swap those values, since bugs would start popping up. If I were alone on this project, I would just create a C enum, like:
enum test_ret { TEST_FAIL=0, TEST_PASS };
(The explicit 0 is there because 1 was once the default, so I don't trust enum defaults.) The important thing is to tie the failure to 0 (false).
This would be enough, since I respect global constants. Not just because it's a C++ Core Guidelines rule, but also from personal experience. People underestimate the danger of literal numbers.
However, working in a team, there'd be people writing things like if (test() == 0), and the enum would be implicitly converted to int, generating bugs, unless somebody hunted those calls down and changed them by hand. That's what I would do, after the enum.
If there were too many - risking the team writing more of them than I could fix - I would change the enum to an 'enum class'. That cancels the implicit conversions to int, causing compile errors. So people would be forced to see the enum class declaration and its global constants - any IDE would open its file and location.
Even so, there would be people just glancing at it, thinking "Ah, some idiot changed it to enum class, thinking it'll make any difference". So if I started to see many casts to int, like if (0 == (int) test()), the issue still wouldn't be solved.
Then a more drastic solution should be taken. I would change the 'int' return type of test to something not declared anywhere:
CALLING_A_MEETING_TO_REASON_ABOUT_THE_STUPID_TEST_RETURNING_VALUES.
Compile errors popping up everywhere. The idea would be to stop the entire production line, making the new-feature-addicted boss freak out, risking my job. But it should be done before this gets out of hand - some decision about not messing with working code. To get the boss hallucinating, I could even put the time in: MEETING_AT_10_30. He would show up sweating, pointing a knife at me:
"Guess what? Nobody steals my job!"
"I don't give a crap about your sh##y job. I'm paid to defend the company goals, which are above you. So I'll keep this up until I get done with this sh##ness, and quit to wash dishes, which is a better job, thus paying more!"
-
26:00, I don't even remember the last time I had a serious issue regarding pointer indexing in C++. I wrote a class that inherits from a container and checks the index. I can turn that on/off anytime, for the entire project, by changing just 1 line!
It's also hard to get in trouble with pointer dereferencing, because everything is 'begin() + K, end() - N', the valid range of containers. There's no room for a pointer issue there. And when I pass a nullable pointer to an f(), it's usually null by default, meaning I want to give the user the choice of sending it or not. In that context, it's obvious that I'll check the pointer soon. If it's a 'const char *' (for a string literal), and I want to gain some performance by not checking it, I set its default to "". So I don't have pointer issues at all!
28:39, he meant that failing those checks forced him to give up on pointers. Pointers are much faster there, because indexing always walks all the way from the beginning. And falling into UB is not an option.
-
11:34, well, if in C one writes a struct with data and pointers to f()s inside, I believe it'd have the same or better performance than C++'s. But there would be several disadvantages - I don't think it would be worth calling it a class:
1) Everything public. A disaster for complex projects. That alone is a crushing factor; one should not use it.
2) The pointers could change, pointing to another f() with the same signature.
3) If a class can't keep its things private, it defeats its very purpose: code security.
4) No inheritance, making things riskier through composition.
5) No constructors, leaving the programmer to deal with dangerous C initializations.
6) No destructors. Although C nowadays has a kind of "general destructor" for any kind of variable, it's up to the coder to call it right when the object is created. It's not automatic like a constructor.
-
13:30, I use Code::Blocks: it's still an IDE, with plenty of modern features and key shortcuts, while being middleweight. I don't feel it's slow.
15:00, I think you can say C++ (15:13) has hostile behaviour - mostly due to some possible undefined behaviour - but not a hostile design. It's pretty good at hiding complexity - better than any other language, I would dare say. It also has a compact syntax, which leads to more productivity. 19:27, speaking of that, I find C++ good even for simple/short projects. It doesn't force you to use any unnecessarily complex tool.
1
-
10:10, the Clang compiler has a tool to "trap" overflow and UB (UBSan, via -fsanitize=undefined). So it'd halt this app. Case closed.
11:33, it's UB because "it doesn't make sense to do that". If 1 wants to zero the var, just assign that value. Or if 1 wants to rotate, which is a slow operation, use a 3rd-party lib for that, or write your own. C doesn't want to lose precious time checking that. And I think the language is right about that. This could be avoided by the discipline of calling 'assert' before any such operation. C++ offers an even more interesting option: wrap the var into a class, making the assert calls implicit/automatic, meaning: they run even when you forget them! Remember that even in C those assert calls vanish by just #define-ing NDEBUG. So, in the release version, they won't cost performance.
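A minimal sketch of that wrapping idea (class and names are mine): the assert lives inside the operator, so it fires even when the caller forgets it, and compiles away under NDEBUG.

```cpp
#include <cassert>
#include <cstdint>

// Wraps a 32-bit value so every shift is implicitly range-checked.
class CheckedU32 {
    std::uint32_t v;
public:
    explicit CheckedU32(std::uint32_t x) : v(x) {}
    CheckedU32 operator<<(unsigned n) const {
        assert(n < 32 && "shift count must be < width"); // automatic, can't be forgotten
        return CheckedU32(v << n);                       // well-defined for unsigned
    }
    std::uint32_t get() const { return v; }
};
```

With -DNDEBUG the check vanishes and the class costs the same as a raw uint32_t.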
16:14, hmm... delicious. Bookmarked to read later.
18:47, in C++ 1 can make a class holding a pointer, which automatically checks it to forbid it from being nullptr. It was even already deployed, in the GSL lib (gsl::not_null), to reinforce the C++ Core Guidelines.
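A minimal sketch in the spirit of gsl::not_null (this is not the real GSL API, just the idea): null is rejected once, at construction, so every later dereference needs no check.

```cpp
#include <cassert>

// A pointer wrapper whose single enforcement point is the constructor.
template <typename T>
class NotNull {
    T *p;
public:
    NotNull(T *ptr) : p(ptr) { assert(p != nullptr); } // checked exactly once
    T &operator*() const { return *p; }                // check-free from here on
    T *operator->() const { return p; }
};
```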
23:00, he's wrong because, although ptr_byte is "walking over the pointed memory", it's always taking the least significant byte of value. memset is not forcing him to use the least byte: he chose that, instead of the entire value. The result was actually: memset(ptr, (unsigned char) value, num);
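This truncation is easy to demonstrate: memset converts its fill value to unsigned char, so only the least significant byte lands in memory.

```cpp
#include <cstring>

// memset(ptr, value, num) stores (unsigned char)value into every byte.
unsigned least_byte_fill() {
    unsigned char buf[4];
    std::memset(buf, 0x1234, sizeof buf); // 0x1234 truncates to 0x34 per byte
    return buf[0];                        // 0x34, not 0x12 and not 0x1234
}
```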
28:50, this may be C-only. I never saw this in C++.
1
-
@mikehodges841 TDD for small/tiny f()s can be good. Even so, the dev often isn't sure about the f() signature for a while. So I think a "hybrid" TDD is better: code the small f() until discovering its signature; once you have that, write the tests, and then complete the f(), which now goes much faster, due to "the blessing" of the tests. It means it can be completed with less thinking, saving energy for the long run.
However, in my experience, big f()s (2-5 screens) doing lots of things, either directly or by calling others, are hard to predict in a TDD way. And they also have this editing issue.
Good thing C/C++ have a kind of tight syntax, making each test fill only 1 line. So it's easy to turn some of them on/off when broken. Via a macro, they may not even be compiled at all.
1
-
2:09, 1 of my biggest bugs in life came when I thought: "3 lines for these 4 ternaries... I guess I'll whittle this down to 2, elegantly". I reviewed it in my mind and... approved! Those lines held about 8 possible combinations. It happened that 1 or 2 of them were wrong, well disguised among the others. Those combinations seldom happened, too.
To make things worse, there were 2 other parts of the code that were more suspect of being guilty, so I spent a while looking closely at them. Automated tests would have caught that easily.
3:40, I guess there was code before and after that break. The problem is that in C/C++ 'break' jumps out of switch, for, while and do-while blocks, but doesn't have this power over if/else ones, as the coder unconsciously assumed at that specific moment. So the break applied to the 1st eligible block above those ifs - the switch - jumping over the processing of the incoming message.
I once got a bug from this. I never wrote a tool for this 1, since it never became recurrent.
For this AT&T case there were some solutions to replace the else-block, trying not to duplicate the code it should jump to:
- Make it a f(). Bad design, since the rest of the project would see it, and might call it by accident. So boilerplate code would have to be added to remediate this.
- Make it a macro f(). Although I don't usually have problems with macros, I agree that it would be noisy/dirty code, depending on its size.
- Use a label after the END IF, to be reached via goto. Better, but this goto could still be called from anywhere within that case, at least.
- A lambda f(). I think this is the best 1: break would cause a compile error, and return from any place would exit at the right spot. However, this was C, and not even C++ had lambdas at that time.
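The lambda option above can be sketched like this (my own toy example, not the AT&T code): inside the lambda, 'return' exits at exactly the right spot, and a stray 'break' would be a compile error instead of silently leaving the switch.

```cpp
// An immediately-invoked lambda inside a switch case.
int handle(int msg) {
    int result = 0;
    switch (msg) {
    case 1:
        result = [&]() -> int {
            if (msg != 1) return -1; // early exit lands right after the lambda call
            // break;                // would NOT compile here: nothing to break from
            return 10;
        }();
        break;                       // this break correctly leaves the switch
    default:
        result = -2;
    }
    return result;
}
```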
1
-
1
-
1
-
15:22, since a class is about protecting data, I design them towards this. So there's no "obscure contract": the base class holds some data, and maybe some f()s, if they are tightly tied to it, technically. It's not a contract, it's an independent, ready-to-work class. A derived class should expand its data, or else be reduced to just independent f()s, in most cases. This also brings the advantage of passing just the base class to f()s that only need it, allowing a possible performance gain by copying it (if small), instead of referencing it.
16:30, the best way to do this in C++ is to avoid the abstract base class: put its methods in a macro, using composition in the derived ones. So the compiler will build the classes as concrete ones, not interfaces, gaining lots of performance (no virtual dispatch).
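A sketch of that macro-composition idea (types and the macro name are mine): the shared method is pasted into each class, so both stay concrete, with no vtable and direct calls.

```cpp
// Shared behavior injected textually, instead of inherited from a virtual base.
#define SHAPE_COMMON \
    double doubled_area() const { return 2.0 * area(); }

struct Square {
    double side;
    double area() const { return side * side; }
    SHAPE_COMMON   // expands to a concrete member, calling this class's area()
};

struct Circle {
    double r;
    double area() const { return 3.14159265358979 * r * r; }
    SHAPE_COMMON
};
```

The trade-off is the usual one with macros: the shared code is duplicated in each class and is harder to debug, but every call is statically bound.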
26:10, this is pretty awkward. It should be just 1 class, and not an interface by default. And if many f()s should not be allowed to change users, almost everything here should be made nonpublic, allowing just a few f()s to change it.
1
-
1
-
1
-
1
-
7:25, I once made a nasty bug by refactoring 4 lines / 2 cmds into 1 line / 1 cmd. They were all made of ternary operators, nested or not. I mentally checked "every" possible case. It ended up being correct in 6 of 8 cases, as far as I remember. It was hard to catch, because 1) it was hard to make it appear (seldom seen, but never gone), 2) it appeared after tons of things had happened (reproducing its scenario could produce a false positive regarding its source), 3) and I had a false lead/clue, which took a while to see through. Hard to test, rare to happen, and promising false clues.
Automated tests would have caught that right at its birth. But there's a question that won't stay silent: if automated tests are necessary for every bit of refactoring, would they in the end take more time than catching a bug when it finally happens?
1
-
1
-
0:44, wait a minute, is there no error about it? Is it just a matter of "being stupid or not"?
10:27, I'm 2, because:
1) If I delete a '//' C/C++ line comment, I have to decide about indentation right away. Whereas with 3, after putting a // (leaving a blank space before the cmd), and later deleting it, what happens to those "3 spaces"? Will they turn into a tab or stay 3 spaces? Does it vary between code editors?
2) Since I use to write pretty horizontal code, tab 2 saves me more space for comments at the right side.
I believe tabs are better than spaces. However, if each 1 on the team has the freedom to choose their own tab config, spaces avoid a mess.
1
-
1
-
1
-
7:14, the only "unsafe design" about those is that, when the vector changes its size, it's not guaranteed to stay at the same location in memory, so the iterators keep pointing to the old address. 1 just needs to "refresh" them. But this need exists only once per allocation (size change). This is not done automatically, for performance reasons. It's like web pages that constantly refresh themselves vs those waiting for the user to do it: the 1st is more comfortable, but wastes performance/resources of the machine.
The operator [] doesn't have this issue, because it comes all the way from the beginning. But it has a performance penalty. I personally use iterators intensively. I only had this issue once.
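The "refresh" discipline can be sketched like this (my own example): after any operation that may reallocate, re-acquire the iterator instead of reusing the old one.

```cpp
#include <vector>

// push_back may reallocate the vector's storage, invalidating old iterators.
int refreshed_front(std::vector<int> &v) {
    auto it = v.begin();
    v.push_back(42);   // possible reallocation: 'it' may now dangle
    it = v.begin();    // refresh once per (re)allocation, then keep using it freely
    return *it;
}
```

Dereferencing the stale iterator instead would be exactly the "invalid pointers" bug described; the refresh costs 1 cheap call per size change.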
8:55, agree. This is awkward because, for every 1 of the millions of f()s, the code would have this amount of lines. The way I do this is to write a macro only once, calling it everywhere:
#if NEWC
#define arrpass(type, name, dim) type name[..]
#elif C99
#define arrpass(type, name, dim) type name[dim]
#else
#define arrpass(type, name, dim) type *name
#endif
Then f()s will be written like (doesn't matter the standard):
extern void foo (const size_t dim, arrpass (char, a, dim));
1
-
1
-
5:57, performance usually works the opposite way: the more promiscuity between types, the better. And C is specialized in that.
6:46, if the C compiler doesn't receive optimization flags (or gets -O0), it will pretty much execute line after line. The optimization flags let it rearrange execution into better moments.
8:50, as far as I've heard, this purity means its f()s don't have side effects. But this is ideal-world stuff. In practice, you'll have to move side effects out of f()s (to keep them "pure"), which loses even the precarious encapsulation FP offers. It seems to me that it only takes a more complex project to see this fail badly, compared to usual FP, in the same way FP does compared to OO. No wonder Haskell has the motto: "Avoid success at all costs!".
12:00, that's why C++ was created; it seems to me the best language for crafting tools. I don't know other languages in depth, but I saw a presentation showing C++ as "the language with the most functionality" (compared to D, Rust, Java/C#). So 1 can go much further than linked lists - and in a safe way, if he creates his tools properly.
20:13, in C the volatile qualifier was used to deal with that. But since 2011 C++ has the standard <atomic> library, whose types make concurrent access well-defined. It's not as easy to use as higher-level languages, but it's a big improvement over the ancient C-volatile approach.
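A small sketch of the <atomic> approach (C++11). Where it matters is under concurrent threads - a std::atomic increment can't tear or race - but even single-threaded, its read-modify-write operations show the API:

```cpp
#include <atomic>

// std::atomic<int>: every operation on 'n' is an indivisible read-modify-write.
int atomic_demo() {
    std::atomic<int> n{0};
    n.fetch_add(5);                         // atomic increment by 5
    int expected = 5;
    n.compare_exchange_strong(expected, 9); // succeeds: n was 5, becomes 9
    return n.load();
}
```

Unlike volatile, which only prevents certain compiler optimizations, std::atomic gives real inter-thread ordering and atomicity guarantees.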
1
-
1
-
2:53, this is so easily solved by OO...
5:50, you should have gone to C++ instead. You would get a shorter/cleaner syntax and a faster language, compared to awful Java.
For C, this can be solved by just creating a struct that carries its length:
struct ArrayWithLength {
int thearray[ARRAY_SIZE];
enum { size = ARRAY_SIZE };
};
But the company would still have to write alternative f()s for the whole standard library, to check array size automatically at each call. I'd even recommend writing a tool to forbid programmers from calling unsafe libraries directly, by statically checking the code.
All of this is solved at once by just switching to C++. Its std::array has begin/end f()s, giving iterators for its limits, keeping the same syntax as any other container throughout the entire standard library.
6:45, right from its 1st standard, C++ had a fully safe, modern, easy-to-use string class. It doesn't have the \0 terminator problem, it keeps the size internally, it's compatible with all C libs and, since C string literals have an implicit \0, with std::string 1 can forget about the terminator, even when expanding the string.
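A compact sketch of both points (my own examples): std::array carries its size and yields begin/end like any container, and std::string tracks its length internally, with no '\0' bookkeeping.

```cpp
#include <array>
#include <string>

// std::array: the size travels with the type; begin()/end() give a safe range.
int sum_array(const std::array<int, 3> &a) {
    int s = 0;
    for (auto it = a.begin(); it != a.end(); ++it) s += *it; // bounds come for free
    return s;
}

// std::string: grows safely on +=, size is tracked internally.
std::string greet(const std::string &name) {
    std::string s = "hi ";
    s += name;   // no terminator juggling, no fixed buffer to overflow
    return s;
}
```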
1
-
2:45, I once read a book by someone who used to code like that. It is pretty legible indeed. However, I don't code like that, because too much time and energy is wasted traveling vertically through the code. And there's also the worry about de-encapsulating things that, if not held by a class or something alike, can then be called by the rest of the project, raising the chances of bugs.
I prefer f()s that fit in 1 screen. They may be bigger, if there are more things that don't make any sense to be seen by the rest of the code.
6:07, I guess for small/tiny f()s, it's easy to know which tests 1 wants. And even if the code is from someone else and you don't understand it, if it has tests, it's possible to refactor it, even fast.
9:20, from my experience with timed work, I can say that there are often tiny interruptions. And if the programmer stops the clock at each of them, the resulting time almost doubles. Examples: a) 2h -> ~3:30; b) 4h -> ~7:20.
It's not because I took too much time to go back. It's simply a matter of too many interruptions: a glimpse of an idea that you don't want to miss, someone talking to you, an uncertainty about the work, some stress, some feeling about hunger or thirst, a joke that you remember, and so on. It's inevitable. I guess that if someone makes a true effort to eliminate this, his stress will skyrocket. 10:12, I agree with you here, but it's not what happens, as I explained.
1
-
The author showed good skills regarding clean and DRY code, automated style (including safety checks) and knowledge of C technicalities. But to me what really marks a senior or ace worker is concern for safety. He didn't mention this, at least not in words.
19:05, for instance, in this code I would point out some things:
- 1st, I would "rewrite everything in Rust"... Nah, it'd be in C++, which is indeed an improvement over C, not leaving anything behind. If the boss didn't agree: https://www.youtube.com/watch?v=O5Kqjvcvr7M&t=22s
- I would check whether a linked list is really the best choice. It's only faster when there are too many insertions in the middle - which implies a sorted list, somehow. Otherwise, a std::vector-like is much, much faster. For instance, if it's just a database, this sorted linked list would be slower than an unsorted vector-like, adding at the end, and removing in the middle by replacing with the last element. Or am I wrong?
- I would study the idea of changing that raw boss pointer to a unique_ptr or something higher-level: more elegant and safer.
- I would change that person::name from C-array to std::string: more comfortable to work with and almost no chance of UB, leading to cleaner code, since it'd require far fewer if-checks by the user. But the main advantage is that std::string is not a primitive type (it's a class). So it's possible to later swap it for a faster user-defined container, keeping the same outward syntax - no tons of time refactoring throughout the entire project. This would not be achievable with a C-array, unless all its uses/calls were made via f()s or macros - which nobody usually does for it.
And I would worry about that only if that std::string were a bottleneck, which is unlikely. But ok, let's imagine the worst scenario: it needs to be replaced by a fixed-size array, which is usually only ~20% faster than the heap. Since it is not as flexible as a std::string, does that mean it'd break its syntax, needing refactoring? Actually no, there's a workaround: a tiny user-defined class, inheriting std::array (same speed as a C 1), and writing by hand all the std::string-specific functionality, like += for concatenating. So all the work would stay inside the class.
In case a bigger name is assigned to 'name', an internal check would be made, as Prime pointed out. But not via assert, which would abort the app - it could be 1 of those always-on apps. Just an if, truncating the name to the size and writing an error to std::cerr.
But probably a fixed-size array couldn't be used anyway: stack memory per app is limited. Since this code is making allocations, it suggests there is a huge number of persons. So it'd get a seg fault. std::string would indeed be the best choice.
1
-
1
-
0:10, I've heard some bizarre things other languages do with inheritance. But C++ treats it nicely. I never had issues with this feature. For instance, the "diamond problem" doesn't exist (at least not by default).
2:50, I came from a C background. I even did my final college work in C. OO was a relief. Safer, easier, higher-level, better for crafting tools (key for improving the language) and also performant, if the programmer knows what he is doing.
4:07, yeah, you're right: a simple variable can fit that description, since it's indeed a "shared state of many previous operations". But the key difference remains the encapsulation provided by OO - at least in C++. FP can only offer global variables with a filter (setter). In C++ you can control who can even call those setters. This is a huge win! 4:17, but if it can't control who can call those f()s/data, its OO doesn't mean much.
12:53, interfaces are bad: start slow and eventually end up being bad design. See this talk: https://www.youtube.com/watch?v=aKLntZcp27M&t=720s
1
-
1
-
1:54, yeah, this is a really good strategy. The only issue is when working with a team: others might not share your concern for safety, preferring to rely on their intuition. So, it's a good idea to shrink the verifying code. 4:00, if possible, an interesting way of doing this would be to pack errors as flags in a bit mask: a unique bit for each of these errors. When any of them happens, it'd be recorded as that Nth bit being set. So the asserts would always look the same, something like: assert_stats (blablabla).
The f() would check all errors at once, since bits work in parallel, taking less than a machine cycle. Like (in C):
if ((Mask & (DUMMY_CLIENT | EXPECTED_CONNECTION | COULD_NOT /*blabla*/)) == 0) return true; // All ok. Note the extra parentheses: '==' binds tighter than '&' in C/C++.
Only if an issue matches would it run a detailed switch-like process, with error messages.
So it'd "feel easy" for anyone to just call the same thing, not dealing with different err messages at each call, for instance. C macros could even hide this at the f()s' beginning: instead of {, write BEGIN, which would put { and call assert_stats behind the scenes. Later it could be just as easily disabled, by editing the macro definition.
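A compilable sketch of that bit-mask scheme (the flag names here are mine, echoing the ones in the comment): 1 AND tests every error at once, and the parentheses around the & are required because '==' binds tighter.

```cpp
// Each error is one bit, so a single mask records any combination of them.
enum : unsigned {
    DUMMY_CLIENT        = 1u << 0,
    EXPECTED_CONNECTION = 1u << 1,
    TIMED_OUT           = 1u << 2,
};

// Checks all errors in one AND; detailed per-error handling only runs on failure.
bool stats_ok(unsigned mask) {
    return (mask & (DUMMY_CLIENT | EXPECTED_CONNECTION | TIMED_OUT)) == 0;
}
```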
1
-
7:07, I think C++ fits this article even better than C#. However, it requires the user to develop "some feelings", if he chooses to use default behavior/resources instead of developing his own defensive tools. For instance, my 1st thoughts after certain actions are:
- Check the result of a f() or algorithm right away.
- If what matters is the index where a pointer stopped after an algorithm, I get rid of that pointer at once.
- For things that I'm used to, I write as fast as I can (favouring productivity). When something starts to be unique, I proportionally get slower and more reflective about it (favouring defensiveness).
When things get complex or I'm failing often (for whatever reason), I stop everything to write a tool to lock in the right behavior forever, going back to being faster/productive. So things go inside f()s or classes (yes, including setters as nonpublic), with only 1/few way(s) to reach them, with as many layers of protection as needed (thanks to C++'s high functionality) - whatever is needed to reach productivity once again, because I value doing things without thinking twice, in crazy-fast typing fashion.
So C++ fits this article's purpose of defensiveness proportional to complexity. An example:
I was working on picking values from a string, by pairs. I decided to use its default behaviors. I wrote fast, everything worked predictably. No need for fancy tools or languages. Then I decided to optimize it, using a pointer that took 2 steps per cycle. It got expressively faster. It was also fast to develop, and worked flawlessly. So I left the computer with my 15487th easy victory using C++.
But my gut feeling told me I had written something I'm not used to too fast. So, calmly drinking a coffee, I briefly reflected on it. Mentally I discovered that the 1st step of the pointer was immediately checked, as I usually do, but not the 2nd. So it would step beyond the array boundary on the last 1, in some cases, whenever the f() didn't return before. Easy check, easy fix: I just added a 1-line check for that.
1
-
1
-
7:12, this guy missed the point that companies' wet dream is to hire juniors only. This is a goal, but it hasn't been possible so far. It's predictable that some of them would eventually ask their top programmers to develop a language/library to help noobs.
14:06, you should, because otherwise it's prone to errors: similar blocks of code put your guard down, while a bug hides between them. But there are some caveats:
1) Copy-paste is good practice, due to saving energy, which will be important in the long run. I even timed that once: it took me 20h to create 1 bug due to that, meanwhile I saved energy many times.
1 just needs to be a bit more careful when doing that, compared to normal code. And soon be ready to do number 2:
2) A common code for all repeated blocks. Not with ifs between them, which leads to bad performance, due to branches. Just the actual common code should be extracted.
14:36, nice statement. Is it yours? I heard that for performance, but now that you said that, it seems more real for abstractions.
22:28, if a language lacks encapsulation, it's over to me.
1
-
12:00, the standard is conservative: keeping a language with "all the features" is already a lot. And since it aims at backwards compatibility as much as possible, it's desirable to keep out things that seem "born to be external" tools.
I guess this view of not breaking things along the way is what allowed the language to exist, with success, for so long.
12:14, chat: "its syntax is a war crime". I find it quite clever: it's productive, since you type 1 thing and it's likely to be correct - the context will tell exactly what it is. I.e.: you don't type defaults, saving energy - what would be an error elsewhere is allowed under a certain context. I don't know how many languages have 'var' and 'let', but this last 1 for constants is stupid: it should not exist (const should be the default).
12:36, it could be worse: forced indentation.
19:19, you must be joking. With a bit of effort, C++ jumps to being a kind of high-level language. 19:55, for instance, when 1 makes an algorithm traverse [begin; end), the pointers are always trapped. I never got a memory seg fault using that! Ok, just once I fell for "invalid pointers", when the memory was reallocated. But never again! And it was an easy-to-catch bug.
1
-
39:53, it's comparing the addresses they are pointing to, which should be y's, if x and y are in this sequence.
41:50, 0x7fff'ffff + 100 is UB, invading the sign bit. The right way to write that check would be: assert (a <= 0x7fff'ffff - 100); And the standard <limits.h> header has the INT_MAX constant for that max value, which should be used instead.
Plus, falling off the end of a non-void f() without returning is UB too - though main is the exception: since C99/C++98 it implicitly returns 0. Anyway, there's a compiler flag to catch missing returns.
46:00, it's broken, because UB was created BEFORE the check for UB was even complete! 48:35, precisely. 53:49, I disagree, because it infringes on the compiler's freedom. Let's say a compiler implements signed 32-bit as 31 value bits, with the sign kept somewhere else. This unnecessary way of checking might create unnecessary problems.
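The corrected overflow check can be written so the check itself can never overflow: compare against INT_MAX before adding, never after.

```cpp
#include <climits>

// Safe pre-check: INT_MAX - 100 is a plain constant, so no operation here
// can overflow. 'a + 100' is only ever computed when it's known to be safe.
bool can_add_100(int a) {
    return a <= INT_MAX - 100;
}
```

Writing `a + 100 <= INT_MAX` instead would be the broken version: the addition overflows (UB) in exactly the cases the check is supposed to catch.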
1
-
1
-
If this engine is for a game, don't do it. Instead, make the game directly, with only what is needed for it. Whenever something becomes generic enough, put it in a general lib of your own. In the far future, if this ever becomes robust, only then make an engine out of it.
I strongly recommend C++, or at least its classes and containers (STL), to properly hide data. A game is a wild environment, eager to steal your data, which needs to be tightly secured.
1
-
@anon_y_mousse Games have a loop, having to keep values memorized. So, goodbye to the pure-f()s ideal world, for instance. There are often plenty of side effects. The loop also takes important decisions, like which branch to go down, according to those values.
So it's necessary to prevent some unwanted f() from "stealing" data (having access to it when it shouldn't) and changing it. If everything is public (global variables, for instance), lots of things will be changed at wrong moments, either by mistake or by a "4am hacking thought of design" - a "theft of data", compared to good design.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
7:36, I'm a huge fan of macros. I think they should even be expanded, made more powerful. But of course, if something can be made without a macro, go for it. For C++, typedef (nowadays "using") provides aliases for types, and a lambda works as a private f() within a f() (something I wanted for long). All of that works as encapsulators for technical stuff, letting us think about the code in a more high-level way. Macros can do this too, even joining pieces of code that wouldn't make sense otherwise.
For those who don't like the plethora of things like that that I write, I just say: their definitions are always either a) right above the current f(), b) at the top of the current file, along with its global constants, or c) in a global place in the project files. That depends on the breadth of their meaning.
On IDEs like Codeblocks, just leave the mouse over the alias/macro, and it'll show its meaning. Or right-click it -> Go to its declaration, after a recommended Ctrl-B, to mark the current place to go back to later, with Alt-PgDown, if the declaration is in the same file.
1
-
29:24, I agree, but I would never rewrite the STL because of that. My f()s and classes tend not to be generic, whenever possible. For things coming from the STL, I use typedefs.
30:15, this is a Java thing. Well-coded C++ uses the friend keyword, to allow just a few f()s to access nonpublic data.
31:43, std::stringstream is just a higher-level scanf. And faster, according to a measurement I took. I would only argue against it if performance were at stake: a pointer, for instance, is much faster. Its syntax is not ugly to me. And std::to_string does the trick, if that's the only reason for using the stream.
32:25, a 1-line f() could fit inside the class definition. And using std::clog, to not be generic, would dismiss receiving the ostream and the other class too. Result (note the 'auto&': plain 'auto' would try to return the stream by value, which doesn't compile, since streams aren't copyable):
auto& show_val ( ) {return std::clog << val;}
Plus, even using the overload, it could be made via:
std::clog << "blablabla" << object.get_val();
I think the code in the video is beautiful, if 1 desires what it offers: send strictly the object (not 1 of its members) to an object that is or inherits from std::ostream. What is stupid and ugly is to write a f() like that (in the video), when 1 would be satisfied with the already-existing 'std::ostream::operator<< (int)'.
And printf is better only when several values are being printed at once - otherwise it's less productive, due to type specifiers (+ warnings) and more typing on the keyboard. So their thesis condemning streams falls flat.
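For reference, the overload being discussed looks like this (my own toy class): it sends the whole object to any std::ostream, so std::clog, a string stream or a file all work with the same syntax.

```cpp
#include <ostream>
#include <sstream>

class Meter {
    int val;
public:
    explicit Meter(int v) : val(v) {}
    // friend so it can read the nonpublic member; works with any ostream.
    friend std::ostream &operator<<(std::ostream &os, const Meter &m) {
        return os << "val=" << m.val;
    }
};
```

Usage: `std::clog << "reading: " << Meter(7);` - the returned stream reference is what makes the chaining work.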
35:25, I tolerate a 1-line f() definition (below its header) inside the class definition, because I can still put { } on the same line. Beyond that, if { } is used normally, it starts to push the code downwards, looking noisy to me.
I also try to align returns, f() names, 'this' specifiers, and f() definitions, whenever they are "attached" (1 right below the other). Of course, I don't put those many stupid unnecessary spaces between the "fields". (35:39, giggles)
About the horizontal look: it's ideal for the eyes, since they (and screens) are widescreen. Code is ideally meant to be read with the eyes, not with the help of hands, unnecessarily traveling vertically.
1
-
36:20, I think dismissing 'get' - just 'length' - is much better because it follows the polymorphic principle: "1 interface, multiple methods". The object (the "interface") gives the context, so length refers to it (as do the other f()s/"methods") - its size, in this context. The code is also assuming a default here: whenever the action is not specified, it means just the word (its value, probably). I believe assuming defaults is a good idea: you type less, speed up communication, save energy, and so on.
Btw, 'size' is a much better name, because it's shorter and more broadly used (by the STL, at least) than 'length', which confuses foreigners about th vs ht.
The only downside of a direct name is that 1 probably has a private variable with the same name. Some people use a '_' suffix, which is kind of acceptable, but not a tradition yet. So the solution may look a bit annoying.
38:09, good thing showing the footnotes. You should do this in all videos.
41:13, no, damn it! It's helping you to fix the BS you were about to do!
PS: Diablo 1 has the best source code I've seen from games. It avoids vertical code most of the time; it's simple overall. However, I spotted 2 flaws: literal numbers and another 1 that I don't remember right now.
1
-
19:05, I forgot to mention 1 more thing: I would make person a class, instead of a struct. The compiler would then fetch me a list (as errors) of all f()s that interact with the class.
Then I would copy their signatures, putting them inside the public area of the class as 'friend's. Now, whenever some error happens due to a wrong value in any member of that class, anyone would know EXACTLY where to look 1st. Awesome feature! It'd also prevent changing member data accidentally in other, unauthorized f()s. Whenever that happens, the programmer would be invited to rethink the design. In my experience, this results in better design and fewer bugs.
To avoid making unnecessary friends, f()s that only read fields from the class would not be friends. They would read via public getters I would write - but no setters.
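A minimal sketch of that refactor (my own names, not the video's code): the data goes private, the 1 authorized writer is declared a friend, and readers get a getter with no setter.

```cpp
#include <string>

class Person {
    std::string name;   // nonpublic: any stray writer is now a compile error
public:
    explicit Person(std::string n) : name(std::move(n)) {}
    const std::string &get_name() const { return name; }  // readers: getter, no setter
    friend void rename(Person &p, const std::string &n);  // the one allowed writer
};

// Only this f() may touch Person::name; any bad value has exactly one suspect.
void rename(Person &p, const std::string &n) { p.name = n; }
```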
1
-
1
-
1:28, to avoid forgetting to close (, {, [, I keep the option of automatically closing them, right after I open them. The same for the return instruction: whenever I start writing the f() header, I put the return in right away. Another solution is to declare the return type (at the left of the header) as auto.
1:30, I only use a C-array when it's const and already initialized with values that I'll access through enums. In this case only, it provides an advantage over C++'s, due to shorter notation. Otherwise, I always use the latter.
2:11, this includes building defensive tools, to make it safer, far from its default.
3:20, I'm learning to use MQL5 for finance; it's a C++-inspired language, more defensive by default. I also heard about some people using Java, to make frequent changes with less risk. But I also heard, in a presentation, that "C++ is the language of choice on this subject".
5:50, once 1 gets used to those, they become easy to manage. If a bug happens, it's no longer hard to find.
8:53, I heard C++ is starting to replace C there too.
11:47, since I don't like configuring compilers to attach them to code editors, I only recently got complete C++17. On Linux, I finally get C++20. Android is barely at C++14, at least with the SDL dialog to Java JNI.
The good news is that, to become massively more productive, just C++11 is needed, and, for a few blasting features, C++14. C++17 (and I guess C++23 too) is weak, but C++20 is said to be the new "changed our way of coding".
12:57, it abstracts the low level by default, it's middle-level, and can jump easily to high level, if 1 develops his own classes, working exactly the way he wants and the project demands.
13:03, all of this since C++11. Lambdas are nice: I just type [ ( { (, and end up with [ ]( ){ }( ), completed by the IDE, which is the hard part. Or I can just type lbd + Ctrl-J, and Codeblocks will expand it according to an abbreviation I previously wrote, which could be like the 1 in the video. To avoid conflicts with the capture, just type [&], and it'll capture everything by reference, also dismissing having to receive f() arguments.
1
-
14:08, there's a way to get rid of all those if checks, safe and easy, that even C can handle: a file solely for handling the node chain/tree. There, some private content handles control over the nodes. For public access, only public setters. (This is 1 of the cases where this FP approach can enjoy the same safety level as OO. It only evens out because those setters are supposed to be called from anywhere.) So the use would be like:
create_node(); // Let's assume it failed. Optionally, an err msg could go to some log output.
goto_next_node(); // It automatically checks the next 1's validity, thus doing nothing here. Another err msg to the log.
int a = read_var_A();
// The validity check is made here too. If there were a "next node", it'd return a variable from the current 1. But since the list is empty (automatically checked too), a literal value is returned. The log should report all of this.
goto_previous_node(); // There's none, so it wouldn't go anywhere.
So, this is pretty safe. The log could get even better, by using a trick I saw in an Eskil Steenberg video: each of these f()s could be a macro calling its actual version, i.e.:
create_node_ (__FILE__, __LINE__); // The macro would call this behind the scenes.
By reporting the caller's __FILE__ and __LINE__ to the log, no matter how large the project, the exact location of the error or of a missing explicit check by the user would instantly be known, making debugging tremendously easy.
The price is that, although the source would be clean of explicit checks, the generated binary would have lots of implicit checks, leading to many more branches, thus slower code. But this is easily fixed: once the app is safe, follow the log's instructions about locations, adding explicit ones. Once no more of those err msgs appear, just change 1 macro line, which controls whether the implicit checks are compiled. And then recompile the whole thing.
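A tiny compilable sketch of the __FILE__/__LINE__ trick (the node f() is a stub of my own; the real versions would do the actual list work): the public name is a macro that forwards the caller's location, so the log pinpoints the exact call site.

```cpp
#include <cstdio>

static int g_last_line = 0;  // records the caller's line, just to make it observable

// The "actual version": receives the caller's file and line for the log.
bool create_node_(const char *file, int line) {
    g_last_line = line;      // a real version would allocate the node here
    std::fprintf(stderr, "create_node failed at %s:%d\n", file, line);
    return false;            // stubbed failure, to show the log line
}

// The public name: a macro, so __FILE__/__LINE__ expand at the CALL site.
#define create_node() create_node_(__FILE__, __LINE__)
```

Every `create_node()` in the project now logs its own file and line with zero extra typing at the call site.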
1
-
6:33, hehe, Codeblocks is like that: a mid-term, not as fast as Vim, without all its vertical moves, nor all the IntelliJ features. But I like to think it has the best of both: it's fast and light enough, doesn't require previous study (just install and run), opens in a few seconds (ok for me), doesn't erase things I wrote, has plenty of famous IDE features, and I can also edit them. I move fast on it. That's what really matters to me.
1
-
1
-
4:13, this is strange. If it's maintainable, it means above all that it has no UB (thanks mostly to the high-level language you are using), and that it's not changing variables in wrong places as well as right ones, across different f()s (i.e. it's well structured).
But if it's good at that, it should be prone to be readable too. If I had to risk a guess, I'd say some variables are being changed in wrong places only, damaging the meaning of things (the strategic view). 5:43, this may endorse what I'm saying: the goals of each f() weren't properly defined: they are mixed. Some f()s are doing the job of others, when they shouldn't. 6:05, my "giant" f()s tend to be 5 screens long. They have a preparation of data (which only makes sense if used internally) for later use, so this takes space. Strategically, it's easy to see: preparation 1st (2-3 screens, let's say), then a main loop processing it. I keep things inside the f() due to encapsulation: I don't want the rest of the project having direct access to that functionality, since it would not make sense.
But if the f() got long enough that I started forgetting what was done earlier, even strategically, then I'd start chopping it. I'd make a class, having the f() as public and several other f()s private, as its internal content. No way I'd leave it at 200 lines if I started to lose understanding of it.
6:23, the problem with FP is that it's too optimistic about its safety. For example, to work with multimedia I like to use SDL2. It's pure FP. So I take some of those things that no alien should mess with, and I put them in classes. So I think, in case of confrontation, OO should force FP to adapt, because FP brings issues to code safety.
1
-
1
-
3:15, I can't, there's C++.
7:25, it's better to separate resources/multimedia from the logic that commands them. Because resources can be large, staying in RAM/heap memory, while the logic goes to the CPU caches, becoming much faster. I tend to let a class hold images, for instance. Whenever something wants to draw, it sends a number/code, meaning a request for some specific image. This also avoids unnecessarily duplicating resources, since they stay in a single-instance class. I also avoid making unnecessary classes. I only create them to defend critical data, since this is already built into the C++ core.
8:30, the same thing applies to situations where an object is about to be unnecessarily duplicated: keep things in 1 object, which should receive requests to access/modify its content.
15:00, these are not good examples for classes; they could be just f()s. Drawing and saving are actions; classes are mostly for data. A class is recommended for when data must travel safely along the battlefield. Safely means only changeable by authorized f()s.
1
-
1
-
10:54, I agree about the spaces, because they actually aid visualization. But it's better not to put { } whenever possible, due to "clean code": less typing, faster compilation, saves energy, visually better. The only drawback comes with macros (C++ example):
#define my_pairXY(fA, fB) int x = fA(); int y = fB() // Dumb code. It should be int x = fA(), y = fB()
for (blablabla) my_pairXY (getX, getY);
And then y slipped away from the for! But everything has a price to pay. In my experience, it's worth it.
12:29, that's my dream. I'm forced to be a C++ lawyer, because modern languages are too dumb to take its several good ideas and improve on them. So, with that mindset, nothing better seems to be appearing for the next... decades? I guess I'll take a deep look into Lisp.
13:07, probably means 'obese'.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
6:00, yeah, choosing (what's likely to be) a correct data structure should be the starting point. And this is 1 of contiguous memory, like std::vector in C++. A mistake here will take too much refactoring later. On the other hand, an algorithm is something that, even if it's entirely wrong, 1 will have to change mostly inside 1 f(), not in scattered parts throughout the project.
9:05, OO is so flexible that 1 can inherit that class and implement his own version of some functionalities, without changing the original.
10:29, I guess the "Dave's Garage" (ex-MS) channel revealed who the guy was, in his series about this algorithm.
11:11, precisely. I made a small game that fit entirely in L1 cache (64 KB for the logic, not the resources), after compressing several variables into bits. I spent less than 10% of the development time optimizing it, which made it 30x faster. And even so, I wrote classes and abstractions (not interfaces) all over the place. It's maintainable and easy to catch bugs in, even after not seeing it in a while. And it's readable, due to enough abstractions.
25:20, there's a quote which says "he who works with a hammer tends to see the world as a nail". He wants to say "use the right tool for the right job".
1
-
1
-
1
-
1
-
0:59, no fun? It's awesome!
10:33, I watched a presentation in which some dude said his team failed badly with Haskell. And if we add to that the Haskell motto, "Avoid success at all costs!", we can assume many have tried... and failed badly!
I can imagine why: there's an obsession with pure f()s. Real-world needs don't embrace that, most of the time. So 1 has to update variables outside the already precarious encapsulation of a f(), to fulfill the side-effect needs so avoided by the language. Thus, everything becomes perilous. The disaster is just a matter of time.
12:50, what does that mean? Once you used htmx, you just use it, without an opinion? The perfect pragmatic tool?
1
-
@digitalspecter If variables didn't need to be kept, everything could be pure f()s. But there are reasons why things once obtained should be memorized: performance (expensive to get them again), being tied 1 to another (need to be changed together), and so on.
Depending on the project complexity, there's not much space for pure f()s. I.e., if you turn a character's head in a videogame, changing his angle of view, even if nothing else changes in the game, the variables x, y, z need to change somewhere, and be memorized, because they'll be asked for later.
Now imagine you split the f() which calculates the changes, to make it pure, leaving the variable attribution to another place. So we have the calculating and the changing f()s:
Pros:
- Code becomes more conceptual, where more actions have a name.
- 1 can shine in the Haskell community, by making more elegant code.
Cons:
- It loses encapsulation: things that would better fit in 1 f() are now scattered across 2 or more f()s.
- Harder to debug: more places to look. It's necessary to look at both the production of the value (the pure f()) and the attribution. They should be 1.
- Even worse: the rest of the project can now see and call those new f()s (btw, C++ allows the user to forbid that).
I'm not advocating having fewer f()s. But creating them just to feed the Haskell utopian idea of having a "pure project" is bad, dangerous, even prone to disaster!
1
-
3:20, it's political, because the language is being condemned for things it almost got rid of. So he's saying that they should implement those safety measures as frameworks, instead of as the already existing external tools. Because otherwise people go to Rust and the like, and once other codebases grow with those languages, there won't be room for C++ outside its giant legacy codebases. It'll be the new COBOL! An undeserved destiny for the best language!
8:07, it looks contradictory: the language was always made unsafe (for performance), and always aimed at safety as a long-term goal. It has a conservative way of seeing things: it wants to evolve without sacrificing things along the way. I guess that vision is correct, for its ambitious goals. And with this in mind, the language has indeed evolved quite beautifully, in my opinion.
10:22, that's because he was wrong. People are too enthusiastic about Rust, to the point of misjudging C++.
11:31, the Clang compiler launched a 'modernize' tool, which rewrites old working code to new standards.
1
-
22:36, C++11 also had range-based for loops: for (myType &value : vector_of_type). But I kept using the old for, through a macro like myfor (it, vector_of_type), because I felt it counter-productive to have to specify the type used by the container. I only embraced range-based for loops later, with auto: for (auto &value : vector_of_type), using auto to kill that faulty feature (auto in range-based for actually already works in C++11, by the way).
25:02, I disagree, because these things are keeping themselves generic enough to work with any type. And everything is separated by scopes; that's why so many :: operators. C++ even has a kind of dry syntax, compared to how many things it's handling in those libs. 1 has to compare beauty to what it's trying to achieve.
27:00, a macro can clean this code. While the lib has it in its generic form, to hold all possible user configs (and I think it's beautiful, because it achieves that with likely minimal syntax), the user doesn't need to do that. If it happens to have several arguments, just make a macro or typedef for its meaning:
#define ParamsForHashDecl typename KeyType, typename ValType, // ... the rest.
#define ParamsForHash KeyType, ValType, // ... all the rest.
using ProjectsOnlyHash = HashTable <ParamsForHash>; // Alias.
Since this is made only 1x, similar f() headers would be like:
template <ParamsForHashDecl>
const ValType &ProjectsOnlyHash::getValue (const KeyType key) const;
getValue is a function of the class 'ProjectsOnlyHash' (1 doesn't even need to read its template args), which receives a KeyType (which can't be changed) and returns a reference to some ValType, which can't be changed either. The f() also can't modify data from the ProjectsOnlyHash class, except members declared 'mutable'.
At any time throughout the project, if the user wants to remember what ProjectsOnlyHash is, or its template parameters, just leave the mouse over the respective word.
1
-
14:09, this is good, not an issue, because it'd result in less refactoring if changed, leading to more productivity.
14:20, 1st of all, this isn't a real-life example. std::vector is a pretty fast data structure, which auto-manages its size internally. The compiler is so smart at optimizing it that I often see it rivaling fixed-size arrays.
14:47, I use msg or msn, to keep the same letters as in Portuguese.
15:00, what are you using? Vim? (kidding) :goodvibes:. On modern IDEs, Codeblocks for instance:
a) 1 can just leave the mouse over the f() name at the call site (cursor not required), and 1s later it shows the f() signature.
b) Right click (on f() name) -> Go to declaration: it opens its header file, right at the f() signature.
c) If none of that worked (mostly due to a cache not being filled yet), add an absurd parameter to the f() call, and recompile. The compiler will say: this is stupid, look at the f() signature. It opens a small log window below, from which you can already see the signature, or left-click on it to open its file.
d) Write a tool: got_message could say what it changes, whenever called:
#ifdef SAY_WHAT_YOU_FUNC_CHANGES_WHENEVER_USER_DEFINED_THIS_MACRO
printf ("got_message changes size\n");
#endif
Then, at the snap of a finger, 1 can disable this for the entire project, by just changing 1 line and recompiling.
16:07, true, but I think the team should agree on defaults before even starting the project.
1
-
0:13, C++11 is the "must see" 1. After that, only minor good features were made. C++20 is now a game-changer, but it's more related to high-level features than to performance or middle-to-low-level functionalities.
0:38, oh... really nice to know that.
2:51, this is what I believe C++ is able to achieve. But that's not its default behavior; 1 should go (and sometimes struggle) for it.
7:43, maybe because of its complexity, it feels solid amid the complexity of a project. I was making 1 in 2 steps, the 2nd "as a superset" of the 1st. I basically inherited the whole 1st step as basic classes for the 2nd. It worked amazingly! Of course I had to amplify things, but no extra problem was added by that. And the classes were organized by data, not so much by meaning. Even so, the whole previous meaning worked for the 2nd, and no performance penalty was added beyond what was inevitable from more work to do.
9:40, delete all of them, letting the compiler point you to the missing 1s.
1
-
2:48, they just gave a name to a thing that has been done in C for decades:
struct CheeseBurger { const char *ing_1, *ing_2, *ing_3; };
#define createCheeseBurger CheeseBurger { "bun", "cheese", "beef-patty" }
And then call it by:
CheeseBurger a_cheese_burger = createCheeseBurger;
5:20, this is a "lesser version" of C++, for 2 reasons:
a) If 1 just wants to customize the initialization, the C/C++ way is much better, as exposed above, due to its compact syntax.
b) If the goal is to customize it along the way, not only at initialization, it should have control over which f() is allowed to do that - and it seems that only C++ has this feature. Otherwise, all sorts of bugs can come from it, because everything will be public, which is the worst nightmare, depending on the project complexity.
10:57, why use an interface for that? Couldn't it be just a class? 16:00, again, why an interface? This thing is awkwardly slow!
21:00, and that's why f() programming < OO: the 1st lacks good facades. In C, you can keep that memory management "encapsulated" away in a different file. But there are issues:
a) Each file like this will be dedicated to 1 object, which hurts scalability.
b) If only 1 generic file like this is created, then the user will have to keep traveling along with that array, which stays exposed to dumb mistakes for an unnecessarily long time.
It's still possible to create several macros, trying to hide that array exposure, but it's still a precarious solution.
1
-
1
-
1
-
1
-
1
-
8:35, I agree. But I don't know if a debugger is overkill. I tend to write unit tests, maybe some prints. This solves +95% of everything. If the bug persists, which is rare, I also use a technique I created, called "hacking the solution": I change the code a little bit, test, and see the results. Then I put things back, repeating the process in a different way. This puzzle points me in the right direction.
10:02, I do that too. I think TDD is a bit invasive when I'm developing the f() signature: I still don't know exactly what it should receive/return, so I want a bit of freedom. As soon as this is established, I write the tests. Once both are made, the rest of the f() development/fixing can reach a pretty fast speed, as it becomes oriented by the tests.
10:10, but I never delete a test, unless it can be replaced by 1 that tests what it intended to, in a more edge-case way. C/C++ also allows conditional compilation, including the tests or not. So their presence can be configured by changing just 1 line. 17:27, the same thing happens with all those asserts: #define NDEBUG 1x beforehand, and all of them suddenly disappear. So the programmer is not condemned to their presence.
17:53, and compilers evolved too. I've seen, more than once, std::vector (a variable-length array) being faster than a fixed-size 1!
19:30, it's possible to write tests that just emit reports/logs, showing errors, but not shutting things down.
20:18, me too. But I use 2 workspaces on Linux because, in the development environment, I don't want other minimized windows annoying me from the other workspace. I also use the Cube, for a nice effect when switching between them.
24:40, 1 of the reasons I use the Codeblocks IDE is that, either on Windows or Linux, I just install it (1-2 minutes), pass it my pre-configured archive (some minutes at most), and I'm already coding, with everything I want.
1
-
1:10, can anybody explain to me the logic in calling an enum a "sum type"? What's a sum type, btw? 1:26, isn't std::tuple enough? 1:53, it's more comfortable than structs, due to the possibility of accessing members via indexing. But I don't use it, because it generates too much assembly code, which leads to slower code.
2:00, are you sure? I know that acquiring/freeing resources is, and this is the main (and should be the only) goal for smart pointers. But raw pointers/iterators are pretty fast for memory access. All STL algorithms demand their use (a few times as references).
4:58, but I am, and I say it's the best language for crafting tools (more functionality/freedom). Whenever I face something risky around the corner, I build a tool to deal with it.
6:50, you can put declarations and definitions in the same header. I do this for small projects.
7:25, there's no issue with the private data being in the header file. It'll continue to be forbidden from public access, unless otherwise expressed.
8:55, I rarely forget to type const. But I agree that const by default is useful. However, it's possible to create a convention for the team:
1) Create some prefix/suffix meaning const for the type, like 'int_'. 2) Do the same for 'mut'. 3) Configure the code editor to highlight them. 4) Configure it to NOT highlight the conventional types. This way, the team will notice at once that something is wrong when they type 'int' and it doesn't highlight.
10:20, I think const_cast is always a mistake. There's an optimization the compiler does, exchanging consts for literals from the beginning, which might collide with that. Better to go right to the f() and fix its const-less issue.
1
-
10:45, I guess the maintainers just opted for keeping the non-const iterator because it's a clue to what the iterator will do inside the f().
11:53, "Move was a mistake. It should not be standardized. The compiler can see the allocators" - said a compiler maintainer. It can manage the movable things for us.
12:20, std::function is not mandatory. I would never use a runtime exception just to have it. 1 can use a pointer to f(). Its only disadvantage vs. that class is its kind of annoying type-declaration syntax, which can get worse. However, there are some workarounds:
a) auto function = my_func_name; // By omitting the ( ), it's already a pointer to f() .
b) If receiving it in a template f(), 1 doesn't even need to know what the type is: just pass the f() name when calling the template f().
c) If it's not a template, and 1 needs to know the type to declare it, just assign it to something not convertible: the compiler will halt, saying the type it deduced. Then copy/paste it right from the log.
12:42, as I said before, write a tool. For instance, why not put a printf in the copy constructor? It'd warn you.
And non-primitive types can be sent by copy. 1) The compiler may arrange things for you. 2) The size is more relevant than the type. For instance, I once had a 1-byte class. It had its f()s, but only 1 byte of data. Passing it by copy throughout the entire project wasn't slower.
13:25, I don't think these are valid issues. It's just a call to a f() when returning. If that's a concern, 1 can keep those values in a struct, dismissing calling f()s on return. These "out parameters" may have a performance cost.
1
-
1
-
1
-
1:36, I think that would better fit as a definition of productivity. I think maintainability is more related to not messing up what was already done whenever you need to change part of it, whether by making a new feature, refactoring, upgrading, optimizing, exchanging strategy, and so on.
1:53, I don't have this issue. I may forget a bit about how things communicate among themselves, but I get the idea back within some minutes to some hours. I may also change the style a little bit.
What's the secret? I don't know exactly, but I keep the code commented on every tiny thing suspected of causing problems in the future. And I keep documentation about the project as a whole, outside the code too, for both the strategic and tactical approaches.
3:35, this shouldn't be happening. Although it's impossible to know how the entire project works tactically (the exact variables and how they change), it's viable to have both the strategy (how it works at all, high level) of the overall data structs, how they communicate and their goals; and the tactics inside each f(). For this last 1, I tend to write its goal, comments on almost every line, and even 1 or 2 edge examples, in the case of a complex f(). I discover this need by making a brief "brainstorm" about it, mainly focusing on future readability.
This is 1 of the key reasons why I have a "spaghetti" style of horizontal code, putting comments at its right side. People take this as ugly and unreadable at 1st glance. But I think it's compact (most of my f()s can be seen on 1 screen), encapsulated (no unnecessary extra f()s are made just for the sake of being "readable"), and commented enough for the future, without pushing the code downwards (which I think is awful and damages readability).
1
-
4:55, I don't tend to have many problems with types. In games, they tend to be very different from each other. For instance, if you store some data in a class, it will sooner or later be packed into bits. If you put some other object where that 1 was expected, a compiler error will arise.
6:47, games are too wild not to use OO. They require proper encapsulation, at a level that FP can't provide.
13:34, I think a better design is to have several containers, 1 for each type. So 1 could have 1 for the white-hat guys, another for the black-hat dudes, another for things, and so on. And those holding just data control, small stuff on the stack, 100x faster. Only for multimedia, really large data, would those smart pointers apply.
So another container on the stack holds info for composing the scene: the index and container of the things present at a certain moment. Data in the other containers is reached via indexes, avoiding the pointer-invalidation problem - and also being much faster, since most data is computed on the stack.
14:17, sure. But doing this would be horrendously slow. The idea is to avoid the reallocation at least 90% of the time. So std::vector::reserve preallocates enough memory for that.
24:44, I heard that zealous feelings attract them.
26:18, it's possible to create a macro for both - so that both are changed together.
1
-
1
-
1
-
1
-
0:10, ooh, this is awkward! The right way is to have a variable in a class, and just ask for it, not losing performance nor getting complicated trying to guess what it is. It's even better if, instead of a variable, this constant is given as a template argument, by a f().
3:49, this is a long discussion, but I think quite the opposite. Horizontal code means 1 will spend less time traveling through the code vertically, which spends more energy and loses a bit of focus. Plus, eyes are widescreen; they were made to see things horizontally. And reading code is an eye thing, not a hands 1. I use tab = 2, to put more things on the same line.
1
-
0:08, hm... I don't know. I tend to fly with C++. Unless Rust dismisses tests, which is not true, I have some doubts...
C++ can feel slower when typing declarations vs. languages that dismiss them, or when those languages have an already existing lib/algorithm that solves things at once. These are the 2 most common cases I can remember now. Other than that, C++ dismisses writing defensive stuff (this shouldn't take too long) and has a concise syntax, which can be made even more concise with macros. Running the app with some inputs known to have certain results, it soon becomes clear if a mistake appears. So stop and fix it. For experts, just by the looks of the bug (from the report), there's a ~90% chance of being pointed onto its right track - yes, almost all bugs are pretty fast to catch, even with manual tests, let alone automatic 1s.
3:03, agreed. I use only FP until I find that some variable could create a mess if changed in the wrong place. Only then do I start to use a class, to both protect and control that var. This is fast to type (not bloated with unnecessary classes) and easy to test (mostly f()s). And even if an environment of class states needs to be tested, through macros (plus C++'s concise declaration syntax) it's possible to compress an entire situation (several classes) wonderfully into 1 automated test, maybe in a single line, if lucky! And that conciseness encourages writing more tests.
1
-
8:56, true. I think my spaghetti code is clean, readable and kind of beautiful. But that's just me.
11:53, performance tends to fight against all the others. So if a code got kind of ugly, to an extent apparently necessary to deal with performance, I don't take it as ugly. I think code should be judged with its goals in mind. Example: Rust or C++ specifiers, to make data fit into safety constraints, are not ugly.
17:20, I think they are bad when they expose content that should be "private" to a bigger f(). Otherwise, I think it's good not to pass 1 screen. Spaghetti style helps with that.
18:14, using IDEs, just leave the mouse over the abstraction, and it shows what the abstraction means - at least how it was declared (enough for macros, lambdas, types).
21:13, well, once I've had the chance to refactor it, I tend to be satisfied for years.
22:59, spaghetti code is the right tool for the job. :elbowcough:
1
-
1
-
1