Comments by @traveller23e on the "Fireship" channel.
-
I have Arch on one machine, NixOS on another. They have fundamentally different use-cases.
Arch is fantastic if you're decently good at configuration and have a single machine whose settings you frequently need to adjust, especially thanks to its incredible docs. It also does a pretty good job of ensuring you don't end up with meaningless duplicate packages. However, if you want any kind of partitioning of program installation, it starts to work against you; e.g., it's easy to install a program system-wide but hard to install it for just one user. It's also hard to install multiple versions of something if you need that (not normally a problem, but it could be). Additionally, a fairly normal install is more or less incompatible with Haskell development, because by default Arch uses dynamically linked libraries while Haskell revolves around statically linked ones (technically options for dynamic linking exist, but mileage may vary, and to give an idea, some related critical bugs have sat unfixed for years).
I installed NixOS to make working with Haskell easier, and I can see how it would be great if you needed to install identical systems on multiple computers. It allows having packages installed only when you're in certain environments, making it really easy for groups to share their build dependencies. What's not mentioned in the video, though, is that understanding how any of that works is very difficult, due in large part to a lack of clear guides; when documentation does exist, it tends to cover only one small part of the setup, so it's up to you to figure out how the components fit together. Additionally, if you're doing a lot of tweaks to your config, firstly they take forever to test, and secondly you'll quickly end up with lots of different versions of your machine, each with its own set of binaries etc. (Of course you can clean that up.) The other thing is that although the versioning saves the state of your installed programs, it does not save the state of your home files, which can easily include configs written by those same programs, so rolling back may not restore data deleted or updated in the interim.
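(For the curious: the per-environment packages I mentioned are typically done with a file like `shell.nix` checked into the project. A minimal sketch, assuming the standard `nixpkgs` attribute names for GHC and cabal, which may differ by channel:)

```nix
# shell.nix -- running `nix-shell` in this directory makes GHC and cabal
# available in that shell only, without installing them system-wide or
# per-user. Teammates get the same build dependencies from the same file.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = [
    pkgs.ghc             # the Haskell compiler
    pkgs.cabal-install   # the cabal build tool
  ];
}
```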
-
@voidvector It's ironic that you mention Java, since it still doesn't have generics working properly in a lot of the more basic classes (methods just return Object and such, so you lose the type checking). Also, in Java and C#, things like lambda expressions keep the languages from being pure OOP. However, I would posit that that is not a bad thing; pure OOP is 1) a concept revered by programming philosophers without much intrinsic merit to the programmer, and 2) a concept people claim to strive for yet don't really seem able to consider in concrete terms, sort of like Clean Code or Agile. (Note to clean coders: it's not that I dislike the concept, just that it doesn't have any objective measurability and can, when unchecked, lead to voodoo programming.) One thing I think would be interesting to see would be a language with fully fledged support for multiple paradigms, but with compiler directives forcing you to explicitly label sections as hybrid OOP/procedural, or functional, or whatever. It would be interesting to see how people react to having to make an explicit decision like that.
Actually, I'd propose adding this feature to PowerShell, as it could only make it better.
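(To make the "methods just return Object" complaint concrete, here's a small Java sketch using `java.util.Properties` from the standard library; it's documented as a String-to-String map, but because it extends `Hashtable<Object,Object>`, the generics give you no type checking at all:)

```java
import java.util.Properties;

// java.util.Properties is documented to hold String/String pairs, but it
// extends Hashtable<Object,Object>, so get() returns Object and put()
// accepts anything -- the compiler can't catch type errors for you.
public class RawGenerics {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("timeout", "30");

        Object raw = props.get("timeout"); // static type is Object, not String
        String timeout = (String) raw;     // cast is checked only at runtime
        System.out.println(timeout);       // prints "30"

        // Nothing stops a non-String value sneaking in via the Hashtable API:
        props.put("retries", 3);           // compiles fine (an Integer, not a String)
        // (String) props.get("retries")   // ...and would throw ClassCastException

        // getProperty() silently returns null for the non-String value:
        System.out.println(props.getProperty("retries")); // prints "null"
    }
}
```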
-
Not quite; he left out bitfields, several of the keywords (static, register, inline, volatile... depending on the C version), and completely omitted the C preprocessor, which is pretty important to say the least. Also, he didn't mention all the data types, nor go into some of the more complex topics like integer promotion, which overflows are undefined behavior and which are defined, and the challenges of types having potentially different byte sizes on different platforms.
-
I'm a fairly junior programmer, and the stuff I've gotten to work on has been so boring compared to my personal projects. It's not that I don't care about the program (the requirements are interesting), but... guys, not everything needs to be written exactly the same way. Seriously: everything seems to be done with classes, not structs (even where value-type semantics would be better); people seem scared of using a class directly rather than an interface for that class (the other day I was asked in peer review why I'd defined the return type of a method as a List<T> rather than an IEnumerable<T>, when the method created the new list internally 🙄); abstract classes seem to be a largely forgotten concept; not to mention the "clean coders" who will spend half an hour of review time debating something really trivial, but who, when asked about the performance implications of writing a LINQ statement one way versus another, can't say anything on the subject. I'm trying to switch from C# to C for my next job, in the hope that the C crowd will have a more performance-oriented perspective. Unfortunately most of the openings are for C# and Java webservices and such.
-
OOP languages tend to be good, but if someone tells you to stick to strict OOP principles, ignore them. Like any other tool in programming, use OOP when it makes sense. If your code would be better with a lambda expression somewhere or whatever, do that. To give an idea of how over-the-top some of the advice gets, the strict OOP cops will tell you not to use inheritance ("antipattern") or static methods.
My controversial opinion would be that it's best to just stop worrying about it and just write code that's readable, low-effort, uses the best tool for the job, and (if applicable) efficient. If the best way forward involves a goto, then fuck it and stick a goto in there. Just double check it was the best way first, and choose a more laid-back coworker to do the peer review.
Also, side note: given the choice of C# and Java, go with C#. It's virtually the same, except generics aren't broken in all the core methods ;)
-
A note about "a value cannot be null, but you can make it nullable by adding a question mark":
The short answer is that this is inaccurate. The full explanation is that there are two kinds of data types: reference types (defined by classes and interfaces, e.g. string) and value types (defined by structs, e.g. int). Reference types can be null; value types cannot.
Therefore, if you create a new variable of a value type (as in "SomeType variableName;") it'll be set to the default value, but for a reference type it'll be set to null. If you need a nullable value type, that's actually a fundamentally different type, written with a question mark at the end. So int? can be null, but int cannot. If you need to compare the two, a cast is involved (if I recall correctly, the int gets implicitly cast to int? prior to comparison). There are also other important differences between value and reference types, so if you're new to the language, be sure to learn them at some point.
However, for some reason the designers of C# decided that this was confusing to newcomers [citation needed], so in the most recent versions of the language there's by default a warning (hence the yellow squiggle, not red) if the compiler detects that a value of a reference type that's not explicitly marked nullable (the question mark) could at some point in the execution become null. This has the advantage that you don't have to worry about null checking if the value isn't marked as nullable, assuming your dev team does a good job of paying attention to warnings and resolving the issues. The code base I'm working on currently has something along the lines of 4700 warnings, if my memory serves (it's also on an older version of C#, from before they started all this non-nullable reference type stuff). However, there are a few disadvantages to this behavior too.
One is that you have to explicitly mark the type if you ever plan on setting the value to null, a frequent thing to want to do. Additionally, if you're trying to work only with non-nullable variables, it can make some patterns more cumbersome (e.g. "SomeType returnResult; if (something) {doStuff(); returnResult = someValue;} else returnResult = someOtherValue; Log($"Returning {returnResult}!"); return returnResult;"). One could also argue that it confuses people at a fundamental level about the differences between classes and structs, sort of like how in Java the line between abstract class and interface was muddied when default implementations were introduced in interfaces. And then you get people like me, who just hate it for no good reason other than "this isn't how I learned it!" and try to justify their hatred through other arguments.