Comments by "MrAbrazildo" (@MrAbrazildo) on "ThePrimeTime"
channel.
7:08, on old hardware, the engine instructions/data didn't fit entirely in the cache. So, depending on how many instructions an action takes, the CPU had to go to RAM, which tends to be ~100x slower (maybe less on a console). On modern hardware, all the instructions/data of an old game fit in the cache, which has far more memory than they require. However, RAM is still used even nowadays, for multimedia stuff: images, video, audio, textures and anything else larger than 64 KB. The optimization for these large things is to load them from RAM into VRAM (GPU memory) at a moment the user doesn't care about, like a loading scene - i.e. God of War's Kratos squeezing through some rocks. Sometimes the same trick is used for loading from files into RAM too.
11:58, but he is doing it for modern hardware, isn't he? The video's goal is just to explain why Quake's algorithm isn't meant for all cases.
13:00, the sad truth is that these pointer transformations are UB (undefined behaviour). That's why the guy commented it as "evil": he just wanted to get his job done, leaving the comment for the future masochist who will deal with the potential nasty bug. UB means the operation is not standardized. So, the app may someday start crashing or giving wrong values (out of nowhere!) if anything changes from the original setup: hardware, OS, any imaginable protocol that interacts with the game. Not even old C had a defined behavior for that, as far as I know.
13:52, in math, a negative exponent means that the number is divided. So, x*0.5 == x / 2 == x*2^(-1). Instead of multiplying the whole number, it's possible to change its exponent by addition or subtraction, which are faster operations.
1:36, unit tests are important to me because:
- They test edge cases.
- They keep testing things that you think are already right, when you change code.
- In TDD, they help plan the design. This may be controversial when one doesn't yet know what the f() signature will be, and it's better to discover it by developing it.
- The most important: once you have an incomplete f() and a bunch of tests, the rest of development can become really fast!
1:55, it's possible to extract that if condition from inside the f(), making it another function. The problem is that it may then be seen by the rest of the project. C/C++ has a way to avoid that, while still keeping this flexibility.
4:00, to make a projectile travel along a 2D plane, an angle is not required. You only need the derivative (the slope): how much Y (vertical) it'll advance for each step of X (horizontal). Once you have the target, you get its x,y and compare it to the origin (your character) x,y: (Ydestiny - Yorigin) / (Xdestiny - Xorigin).
6:47, I would do it the first way, as a bigger map. The spawns would be coordinates relative to the screen, so there would be no problem with number overflow. "Node" is a dangerous word: it always suggests to me that the programmer will use linked lists (slow as hell) where he shouldn't.
10:18, pitfall here: some people think the 100 will take part in the division, when it only multiplies the result.
10:23, a bit harder? Dude, I played Van Helsing, and it was a lot harder. The harder the better. Don't be afraid to do it.
13:54, because it's hard to imagine a harder task.
3:18, you (23:54) and Casey said something I already knew: wait for the need before abstracting. This is the right way to design: focused on practical needs. And that means building abstractions for defensive purposes too - before somebody starts to advocate FP.
3:35, one of the basic optimizations made by the compiler is inlining f()s. The only concern is reaching a limit, after which it won't inline or starts doing extra calculations about it - but one can raise that limit. The actual problem with helper f()s is the chance of calling them by mistake, or feeling the need to memorize them as options when thinking about the whole thing. To avoid this extra stress, a class hiding them can help, but that starts to require more boilerplate. The best solution is writing lambdas (within the f(), of course), if those blocks are called more than once. Otherwise, I keep them inside the f(), as explicit code, if they only make sense there.
5:03, if 2 apps differ by milliseconds per action, people will feel better with the faster one. So it's not just for speed-critical projects: even common ones can benefit. Meta said that "people feel more engaged when the app is faster" (probably about ms and above).
0:03, I'll translate this: they don't usually spend time building tools. If default Rust has everything they need, ok, go for it. But don't complain later that:
- Windows 10 is 40% slower than Linux Mint 19.2, according to my tests.
- You lose too much time compiling.
- You missed the chance to have easy-to-use tools, conciliating speed and safety.
2:16, this just means Rust has a better default behavior for safety. From what I've heard, Rust lacks the flexibility of C++. Thus, it ends up being an inferior approach, compared to C++ coded to work exactly how the user wants.
3:41, C++ does it better: it doesn't waste speed tracking the object, and it also deletes it for you, automatically - while still allowing you to delete it by hand, if you want.
4:10, this is the problem: converting C to Rust may look like a big win, but it misses the opportunity to do it in C++. For instance, that means they won't ever have a switch for "verify my code now" / "now don't verify it" (for compiling speed).
0:42, if I were to take care of this, I would start by rewriting this file into something much smaller. After all, 12 GB is too much. Let's say names average 8 letters and numbers go up to 99.9. Then each line takes 8 (name) + 1 (';') + 2 (integer digits) + 1 ('.') + 1 (fraction digit) + 1 (newline char) = 14 bytes = 112 bits. Writing a binary file instead: 4 bits for the fraction digit, 7 for the integer part, and ~11 for a number symbolizing the name, to be looked up later in a separate table. So 4 + 7 + 11 = 22 bits, roughly 20% of the 112 bits per line, which would reduce those 12 GB to around 2.4 GB.
This would make any kind of search a lot faster.
0:00, interesting that you mentioned a startup. C++ is my favorite by far, but I don't know if I would trust it in the hands of a junior team. Maybe if there were a long deadline ("done when it's done" mode), or if I were watching closely what they do (almost "pair programming").
1:19, I often use inheritance, and have no issues with it. I can barely imagine its absence.
1:40, the optional type is comfortable to code and understand, but it has a concerning drawback: it carries a potential branch within, which may lead to slower code. Each f() doesn't know whether it's valid or not, leading to too many IFs, ugly code, redundant work, and a tendency towards slowness. It works like a 9-months-pregnant woman, making everyone apprehensive.
3:29, C++17 was a conservative and kind of small standard. C++20 changed the way we write code. It's pretty elegant, while still keeping the old C approaches.
3:42, I don't know about Rust, but C++ keeps its compatibility with crude methods from C, so that you can still solve things in a dumb way. That already defeats the alleged "complexity" issue. It's also utterly nonsense to talk about a "stability issue" in C++. It keeps its performance and unmatched flexibility, with newer features cooperating with old ones.
4:15, this is pretty stupid. OO improved safety a lot. Lambdas are often used, higher level and safer than anything C is using - and zero cost too.
9:50, there are tools for C/C++ that catch those errors, faster than Rust would compile.
13:08, C++ has the only right OO implementation and is the most flexible language so far. Unmatched on both. Aside from that, it's capable of everything C does, and implements higher-level features nearly as comfortably as higher-level languages do. So it seems more like a "master of most things".
13:14, unwise.
1:53, when I code, from those, I prioritize in this order:
1) Safe. And thus Encapsulated (necessary for safety), Neat and Tidy (from an outside PoV), Noninvasive/Scalable (natural for OO), Systematic (few public things). It hurts testability.
2) Performant. This one tends to hurt everything else.
3) Readable. After some adaptations, it tends to become Reusable/Understandable. If possible, with deeper thought, it may also become (at least in its public interface) Simple, Elegant and maybe more Testable.
I don't know many languages, but I can assure you C++ achieves all of this. For instance, let's say a project works intensively with strings:
1) Safe: I use std::string or something alike. It's already pretty easy to use, but if I want something more automatic, I inherit it in my own custom class.
2) Performant: let's say it's hurting performance. I don't go to C's array of char. Instead, I hide in my own class exactly what the project demands.
3) Readable: after those 2 stop fighting each other, I usually make some changes towards readability/elegance/simplicity. Right after, testability may be the target.
In the end, it may not look as pretty as some Python-like solution, but given all the things it achieves at the same time, it's much better "for its purpose".
2:50, dump C++ for what? I want/need:
- Nonpolymorphic inheritance, instead of composition;
- Actual encapsulation, breakable only by a selected group of few f()s (meaning data is actually private, not indirectly public through filter/modifier functions, as in C);
- Full freedom to iterate over a container. This way I decide the level of usage constraints, by coding my own;
- Not to be forced to lose performance because somebody else decided it for me - GC, for instance. In C++ I code things whose security I can turn on/off at the snap of a finger. Does Rust have that? Can I turn its slow compile on/off anytime?
- Lots of things able to work hidden backstage (several anytime-optional checks, for instance), so that I can develop powerful tools.
9:20, it has the fixed-width types (int8_t up to the platform-sized ones, like size_t) as the precise integers. But if the size doesn't need to be precise, and, even more, one wants to make the app "future proof", a long will target the platform word size, so it'll keep being "upgraded" as the platform grows. It can even become faster over time, if the app was designed to do binary operations on one variable, when that is faster. long long is then what it sounds like: twice the size, if allowed. short is for the smallest size, or at least smaller than the middle one, int.
12:23, cringe moment: I don't know what this May 9th is.
15:04, not all platforms allow this doubled size. So something like int128_t is meant to raise a compiler error if not supported. long long means "the maximum possible signed size". So, if 128 bits is not supported, it may be shrunk back to 64 bits, since the user didn't necessarily request 128 bits.
1:07, by that do you mean "data racing" (more than one thread writing the same data at the same time)? That has been easily solved since C++11, with the standard <atomic> library, at compile time. The remaining issue is "false sharing": different threads changing different memory within the same cache line. When one thread writes to its portion, it "freezes" the entire cache line, not allowing the other thread to write during that brief moment. This is a performance issue, not a bug. It's still solved by hand, by aligning the data so that each thread gets its own cache line.
1:24, what exactly does Rust solve here? Those pointers are meant to acquire an opened resource, freeing it later automatically. A common C++ skill issue here is using those pointers for data that could easily fit in the caches. Since people are used to calling 'new' in other languages, in C++ it'll put that memory far away, in RAM or an even worse place, becoming at least 100x slower, unless the compiler saves the junior dev.
Why did C++ make life harder on that? Because it actually made life easier: it assumes one wants the data in cache, so by default it dismisses us from even having to use 'new'.
1:55, I don't know about unique_ptr. But what I know, and have seen more than once, is that the compiler is smart enough to put an entire std::vector in the cache. Assuming a unique_ptr is part of it, it's prone to be there too. But of course, it depends on the memory it's holding: if it exceeds the cache sizes, it'll stay in RAM. I think there's nothing Rust can do about that.
17:12, I thought he would say that C's pointers are the same concept as in Assembly. Now I'm confused, since I haven't dealt with it for a long time. C++ iterators do some compile-time checks, while being pretty much the same speed.
12:49, I recently benchmarked unrolled loops vs normal ones in C++: the former got almost twice the speed of the conventional way. And thanks to macros, I wrote each unrolled algorithm in 2 lines (no f()s were even compiled), vs the ~6-10 line f()s of regular loops. 12:55, the reason is that unrolled code makes the chance to use the parallelism of special hardware for basic operations clear to the compiler. Follow this talk: www.youtube.com/watch?v=o4-CwDo2zpg&t=2100
13:05, in C++ I usually code macros for doing things in a safer way, and then I can cut them all out at once, by changing 1 line and recompiling. Could Rust do the same, activating/deactivating its safe mode, maybe with its macros?
@0ia I think that app complexity is something much more challenging than its size. It's possible to be challenged by some concept if the project demands it right away, regardless of its size. For instance, global variables. They're a known bad design choice, but if one only works on large projects that don't use them for decisions throughout the code (which branches it'll take), like databases, the exposure of those variables won't be felt as the dangerous thing it is. But coding a game, for instance, even a small one - which tends to have tons of branches taken according to the values of those variables - the programmer will be thrown into a really unbearable mess.
42:58, you are right, but you are comparing with C. In C++, just use make_unique: it'll even dismiss you from having to write another line like this defer (also avoiding the leak of forgetting to call it), because its destructor will free the memory when the time comes (when it reaches the end of its scope).
43:54, wrong! Using C instead of C++ is the mistake. His use of a macro, to work around the limitations of C, was well applied. I used that a lot in the past. C doesn't have anything defer-like, as far as I know.
1:38, I used to dislike #ifdefs. Nowadays, I think they are quite nice, because they help debugging. For instance, if a block of code won't be used in some compilation, that code won't actually exist - even raising a compile error if some piece is missing. So it's already a check, a confrontation with what the coder is thinking. And it's possible to keep flipping the switches, getting a quick statistical read on any bug.
The Code::Blocks IDE can "blur" blocks not targeted for compilation, a pretty nice visual effect.
4:19, I agree with you, because people tend to think the Single Responsibility Principle means technically only 1 thing, but I think it may be semantic instead of technical. So a f() may do several small/tiny technical things to achieve 1 goal. This way, from outside the f(), the rest of the project can look at it thinking about that goal only. It's already an isolated functionality, despite taking more actions internally.
4:31, I completely disagree here. I already wrote tons of words on a video of his, uploaded by another channel. If someone is interested, I may write some words here too.
6:28, sorry, dude, we are waiting for Carbon. If only it changes that bad syntax...
14:35, I think this is much more important than it looks. I can't prove it, but I feel like I spend more energy when travelling vertically. So it should be avoided whenever convenient.
18:02, I personally omit { }, because I love compact code. But I wouldn't argue against this standard. I would put them on the same line, though. 18:21, in C/C++ the if isn't ruined by a comment, even without { }.
34:00, I'm not following the point here. What I heard about this happens when 2 pointers may point to the same address. That's not the case for an int and a double. I haven't been following C, but speaking for C++, this is accepted, even with the same type. The issue is that C++ might lose performance:
void f(int *x, int *y) { // They may -> to same address.
*y = 3; // It might be indirectly making *x = 3 at the same time too!
*x = *x + 2;
}
It loses performance: each time one of these pointers appears, it must fetch from the pointed-to address, instead of working with the value in CPU registers!
Easy portable fix: if they are not const, copy their values to local variables, and bingo: ~12x faster!
Nonportable fix: compilers have different attributes for annotating those parameters.
The Clang compiler has a way to report when it's losing this performance. One can go to compilerexplorer.org(?), put the f() there, select Clang with -OX, and there's an option (I don't remember where) which will say "variable clobbered by" something, when it couldn't optimize.
However, I think this is an issue only when compiling part of the code. Otherwise, the compiler should be able to track all the pointer steps, knowing for sure whether 2 of them point to the same place.
1:50, those 2 are meant for big fat resources only. But there's a lot of memory accessing in which you can gain performance through pointers. Some days ago, I was optimizing a small project and tried a bunch of algorithms for passing data from a string to an array. Using pointers (iterators, actually) was the fastest, making the whole app 10x faster than the slowest, sscanf() from C.
7:30, in C/C++ it's not undefined behaviour, as far as I know. It will only lose performance. There's a tool from the Clang compiler that shows where/when it loses it in the code.
8:55, he said that this pointer usage was an attempt to get around the borrow checker rules.
9:25, undefined behaviour is the sum of all fears in coding, because the program is no longer guaranteed to obey the rules from that point on. It means that, for instance, a check you wrote may simply not execute; reading an array may go past its end; and things like that. So your project will crash on some runs and work on others!
11:30, to take an argument away in C/C++ macros, you need to redefine it:
#define ptr__field (*ptr).field
But you'd have to do that for the entire class (all fields), multiplied by each different pointer name!