Comments by "Scott Franco" (@scottfranco1962) on "Brodie Robertson" channel.

  10.  @lorabex791  1. I work at Google, so no. They may be trying to do it somewhere, but not here; we still use it. Whatever people think of C++, they worked very hard to make sure it was efficient. 2. Drivers are different from kernel code. The majority of drivers are created in industry (I have worked on both disk drivers and network drivers), and a lot of that work is C++ nowadays, which is the preference. There are a lot of Windows drivers in C++, and it is customary to borrow code between Windows and Linux drivers. 3. I wasn't talking about replacing any code. This is about new drivers.
      I don't have a dog in this fight. As a driver/low-level guy I do most of my work in C, but increasingly I have to do C++ for higher-level stuff. Google loves C++ (despite what you heard). Rust is "said" to be gaining traction at Google, and I have done deep dives into Rust, so I'm not (completely) ignorant of it. Any language that claims to be a general-purpose language, IMHO, has to have the following attributes: 1. Generate efficient code. 2. Interoperate with C, since that is what virtually all operating systems today are built in. This includes calling into C and callbacks from C (I don't personally like callbacks, but my opinion matters for squat these days). [1] Rust is fairly good at calling C, and gets a D grade for callbacks. Golang is basically a non-starter, because it has such an odd memory model that interactions with C are expensive. Any language can interoperate; Python can call into C, and it's an interpreter. It's about efficiency.
      Obviously it's a moot point at this time, since "Linus ain't a gonna do it", but it should be discussed. C++ is too important a language to ignore, especially if Rust gets a pass.
      Notes: 1. There are some rocks in the stream of C, including zero-terminated strings and fungible pointers (pointer parameters that are NULL for "no parameter" and small integers for "not a pointer"). Most languages outside of C do not support some or all of this. These are bad habits in C and are eschewed these days; see the various arguments over strcmp vs. strncmp.
    2
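The C-interop hurdles that comment lists (zero-terminated strings, pointer parameters that may legitimately be NULL, and callbacks fired from C) are easiest to see from the C side of such an interface. The sketch below is a made-up, driver-flavored example; device_open, register_rx_callback, and rx_callback_t are hypothetical names, not any real kernel or driver API. A binding from Rust, C++, or Go would have to express all three patterns comfortably to be usable here.

```c
/* Hypothetical driver-style C interface of the kind the comment describes:
 * a zero-terminated string, a pointer that may be NULL for "no parameter",
 * and a callback that the C side invokes later. */
#include <stddef.h>
#include <stdio.h>

/* Callback type: C will call back into the foreign code with a buffer. */
typedef void (*rx_callback_t)(const unsigned char *buf, size_t len, void *user);

static rx_callback_t g_cb;      /* registered callback, if any   */
static void         *g_cb_user; /* opaque context for the caller */

/* 'name' is a zero-terminated string; 'opts' may be NULL for "no options". */
void device_open(const char *name, const char *opts)
{
    printf("opening %s (%s)\n", name, opts ? opts : "default options");
}

/* The registration call a Rust/C++/Go binding would have to expose. */
void register_rx_callback(rx_callback_t cb, void *user)
{
    g_cb = cb;
    g_cb_user = user;
}

/* Something on the C side later fires the callback. */
void simulate_rx(void)
{
    unsigned char frame[] = { 0xde, 0xad, 0xbe, 0xef };
    if (g_cb)
        g_cb(frame, sizeof frame, g_cb_user);
}

/* A trivial "foreign" handler, written here in C for brevity. */
static void my_handler(const unsigned char *buf, size_t len, void *user)
{
    (void)user;
    printf("received %zu bytes, first = 0x%02x\n", len, (unsigned)buf[0]);
}

int main(void)
{
    device_open("eth0", NULL);               /* NULL means "no parameter" */
    register_rx_callback(my_handler, NULL);
    simulate_rx();
    return 0;
}
```

Calling device_open is the easy direction; the registration/callback pair is where the "D grade for callbacks" complaint bites, since the foreign handler must be exported with C linkage and a C-compatible signature.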
  18. The problem in Unix/Linux is that, for a host of reasons, users are encouraged to take root permissions. Sudo is used way too often. This breaks down into two basic issues. First, users are encouraged to install programs with privileged files or files in privileged areas. Second, in order to fix up problems, it is often necessary to modify system-privileged files.
      The first issue is made far worse by effectively giving programs that are installing themselves global privilege to access system files and areas. It's worse because the user often does not even know what areas or files the program is installing itself in. The first issue is simple to solve: install everything a user installs local to that user, i.e., in their home directory or a branch thereof. The common excuses for not doing this are "it costs money to store that, and users can share" or "all users can use that configuration". First, the vast majority of Unix/Linux system installations these days are single user. Second, even a high-end 1 TB M.2 SSD costs about 4 cents per gigabyte, so it's safe to say that most apps won't break the bank. This also goes to design: a file system can easily be designed to detect and keep track of duplicated sectors on storage.
      The second issue is solved by making config files or script files that affect users local, or having an option to be local, to that particular user. For example, themes on GTK don't need to be system wide. They can be global to start but overridden locally, etc. A user only views one desktop at a time; the configuration of that desktop does not need to be system wide.
      My ultimate idea for this, sorta like containers, is to give each user a "virtual file system": go ahead and give each user a full standard file tree, from root down, for Unix/Linux, BUT MAKE IT A VIRTUAL COPY FOR THAT USER. I.e., let the user scribble on it, delete files, etc., generally modify it, but only their local copy of it. The kernel can keep track of which files are locally modified by that user account, akin to copy-on-write paging. You can even simulate sudo privileging so that the system behaves just like straight Unix/Linux, but only modifies local copies, etc.
    2
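A rough user-space illustration of the "virtual copy of the file tree" idea, under invented conventions: a per-user shadow directory (here ~/.overlay, a made-up path, with a made-up resolve_path helper) is consulted before the real system file. The comment proposes doing this in the kernel per account; this sketch only shows the lookup rule, not the copy-on-write or simulated-sudo parts.

```c
/* Minimal user-space sketch: prefer the user's private shadow copy of a
 * system file if one exists, otherwise fall back to the shared file.
 * The ~/.overlay location is hypothetical, not an existing Linux facility. */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Return the path to open: the user's private copy if present,
 * otherwise the real system file. */
static const char *resolve_path(const char *sys_path, char *buf, size_t len)
{
    const char *home = getenv("HOME");
    if (home) {
        snprintf(buf, len, "%s/.overlay%s", home, sys_path);
        if (access(buf, F_OK) == 0)
            return buf;            /* user has a locally modified copy */
    }
    return sys_path;               /* fall back to the shared system file */
}

int main(void)
{
    char buf[PATH_MAX];
    /* A write through a real implementation would first copy the file into
     * ~/.overlay and then modify only that private copy (copy-on-write). */
    printf("reading: %s\n",
           resolve_path("/etc/gtk-3.0/settings.ini", buf, sizeof buf));
    return 0;
}
```

A real implementation would intercept writes as well, copying the system file into the per-user tree before modifying it, which is the copy-on-write behavior the comment compares to paging.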
  31.  @boembab9056  If you scale the screen, you are letting the OS/presentation system draw things bigger for you. If you do it in the application, the application is doing the "scaling", QED. So let's dive into that. In an ideal world, the presentation system is taking all of your calls, lines, drawings, and pictures, and scaling them intelligently. In that same world, the Easter bunny is flying out of your butt. All the system can really do is interpolate pixels.
      Let's take a hypothetical for all of you hypothetical people. They come out with a 4M display, meaning 4 megapixels, not 4 kilopixels. You scale 100%, meaning "no scaling". All of the stupid apps look like little dots on the screen because they are compressed to shit. Now we scale some 1000 times to get it all back. If the scaler does not consider what was drawn, but just pixels, it's going to look terrible as scaled, just as if you blow up a photo on screen far in excess of its resolution. Now the apps that are NOT stupid, but actually drew themselves correctly, are going to look fine, perhaps just that much smoother because they took advantage of the extra resolution.
      Now let's go one more. I know this is boring; drink coffee, pay attention. Drawing characters at small point sizes is a problem, right? People worked out all kinds of systems like "hints" to try and make fonts look good at small point sizes like 5-8 points. But you bought that 4K monitor and that 4K card, and THEN you bought a fast CPU to push all of that data around. Guess what? That 5 point problem you had is gone. Just gone. There is sufficient resolution to display fonts on screen down to the point where you can barely see them. Now ask yourself: how does a scaling algorithm do that unless it DRAWS the characters at that resolution? Keep in mind that programmers spent decades on TrueType formats and computed character drawing to match mathematical curves to pixels. Is an interpolated scaler going to do that? No, no it is not. Peace out.
    1
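The arithmetic behind "that 5 point problem you had is gone" can be checked directly: points are 1/72 of an inch, so the number of device pixels a glyph gets grows linearly with DPI. The densities below are illustrative assumptions (96 DPI for a classic desktop panel, roughly 163 DPI for a 27-inch 4K panel), not measurements of any particular monitor.

```c
/* How many device pixels a glyph of a given point size gets at different
 * screen densities.  Points are 1/72 inch, so pixel height = pt / 72 * dpi. */
#include <stdio.h>

static double point_height_px(double points, double dpi)
{
    return points / 72.0 * dpi;
}

int main(void)
{
    const double sizes[] = { 5.0, 8.0, 12.0 };
    const double dpis[]  = { 96.0, 163.0, 326.0 }; /* desktop, ~27" 4K, very high DPI */

    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++)
            printf("%4.1f pt at %5.1f dpi -> %5.1f px   ",
                   sizes[i], dpis[j], point_height_px(sizes[i], dpis[j]));
        printf("\n");
    }
    return 0;
}
```

At 96 DPI a 5-point glyph gets under 7 pixels of height, which is where hinting was needed; at 4K-class densities it gets 11 or more, which is the comment's point about the small-size problem simply disappearing when the app actually draws at device resolution.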
  32.  @boembab9056  Look, I know you are a smart guy, but think about what you are saying. If the application knew how to take care of its own scaling, the OS would not need to do anything, no scaling at all. The typical flow is:
      1. If the application has never come up before (default), it takes the measure of the screen, then presents itself according to a rule of thumb, say 1/4 the size of the screen.
      2. Size the fonts according to the on-screen DPI. I.e., if you have 12 point type, then choose an on-screen font accordingly. Points are 1/72 of an inch, so 12 point type is about 0.17 of an inch in height ON SCREEN.
      3. Set other dimensions accordingly. I personally use the point size to dimension everything else on screen, and I have found that works well.
      4. If the application has executed previously, then just use the last window size. That is a reasonable expectation for the user.
      Do that, and no scaling is required; the app knows what to do. If you think about it, what scaling REALLY does is accommodate stupid applications that don't understand how to scale themselves properly. I follow all of the rules above in my applications. I'll readily admit that I had to do some work to get to 4K displays. Mostly it was because I used standard (and, it turns out, arbitrary) measures to size items in the app's display. Also, when moving to 4K, I implemented a standard pair of keys to let the user adjust the size of the app's display (Ctrl-+ and Ctrl--, same as Chrome and most other apps). This is the correct solution. Rescaling all applications because SOME programmers don't know what they are doing is not the right solution, and, indeed, it actually punishes the applications that did the right thing by messing with their scaling instead of letting them do it themselves.
    1
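The sizing rules in that comment reduce to a little arithmetic, sketched below with invented names (compute_layout, struct screen, struct layout) and made-up example numbers: a first-run window of a quarter of the screen, font pixels derived from the physical DPI and the 1/72-inch point, other dimensions keyed off the font, and a Ctrl-+/Ctrl-- zoom factor applied on top.

```c
/* Sketch of the app-side sizing rules described above, with made-up numbers.
 * None of these names correspond to any real toolkit API. */
#include <stdio.h>

struct screen { int width_px, height_px; double dpi; };
struct layout { int win_w, win_h; double font_px, padding_px; };

static struct layout compute_layout(struct screen s, double base_points, double zoom)
{
    struct layout l;
    /* Rule 1: first run, take 1/4 of the screen area (half of each dimension). */
    l.win_w = s.width_px  / 2;
    l.win_h = s.height_px / 2;
    /* Rule 2: points are 1/72", so pixel height follows the physical DPI,
     * then the user's Ctrl-+/Ctrl-- zoom factor is applied on top. */
    l.font_px = base_points / 72.0 * s.dpi * zoom;
    /* Rule 3: derive other dimensions from the font size. */
    l.padding_px = l.font_px * 0.5;
    return l;
}

int main(void)
{
    struct screen hd  = { 1920, 1080,  96.0 };
    struct screen uhd = { 3840, 2160, 163.0 };   /* e.g. a 27" 4K panel */

    struct layout a = compute_layout(hd,  12.0, 1.0);
    struct layout b = compute_layout(uhd, 12.0, 1.0);

    printf("1080p: window %dx%d, 12pt font = %.1f px\n", a.win_w, a.win_h, a.font_px);
    printf("4K:    window %dx%d, 12pt font = %.1f px\n", b.win_w, b.win_h, b.font_px);
    return 0;
}
```

Run on the two example screens, the window stays at a quarter of the screen and the 12-point font lands at roughly 16 px on the 96 DPI panel versus roughly 27 px on the 4K one, which is the "no scaling required" behavior the comment argues for.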