Comments by "" (@diadetediotedio6918) on "ThePrimeTime"
channel.
For me the question boils down to a few simple concepts:
1. The object-oriented paradigm revolves around passing data between unified domains, not necessarily "classes" or "structs". That means you send information back and forth and try to make each specific domain responsible for interacting with that dataset within its scope of states.
2. The functional paradigm revolves around making code immediately obvious in a declarative way, even if that involves using data encapsulation and the like. The code must read fluently, and the result of an expression must be immediately obvious not from an individual stream of parts but from the complete expression (and, of course, it must be decomposable).
3. The procedural paradigm revolves around ordering things in a specific, controlled way, not through statements or units but through small, specific, determined logical steps that modify or change the data as they are performed. The scope of procedural code is always linear, and it must be possible to read and understand it linearly.
To that extent, I can see that all paradigms employ common concepts and can be summarized into common notions present individually in each of them, yet they are not immediately reducible to those notions, like a complex system. Each of them has its place, and I can understand why multi-paradigm languages won and why purely functional or purely object-oriented languages became less and less popular.
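To make the contrast concrete, here is a minimal sketch of my own (not part of the original comment) showing the same task, summing the even numbers in an array, written in a procedural style and in a declarative/functional style in C#:

using System.Linq;

class ParadigmSketch
{
    // Procedural: small, ordered steps that mutate state and read linearly.
    static int SumEvensProcedural(int[] numbers)
    {
        int total = 0;
        foreach (int n in numbers)
        {
            if (n % 2 == 0)
                total += n;
        }
        return total;
    }

    // Functional/declarative: the whole expression states the result directly.
    static int SumEvensDeclarative(int[] numbers) =>
        numbers.Where(n => n % 2 == 0).Sum();

    static void Main()
    {
        int[] numbers = { 1, 2, 3, 4, 5, 6 };
        System.Console.WriteLine(SumEvensProcedural(numbers));  // 12
        System.Console.WriteLine(SumEvensDeclarative(numbers)); // 12
    }
}

An object-oriented version would instead make some domain type own the numbers and expose the summation as behavior on that type.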
My sincere view, as someone who has been programming since I was 12, is that hard work pays off, but only if it's something you want to aim for. I'm not talking about it necessarily being something you like, but about aiming for something bigger than yourself and working hard to achieve that goal; the other alternative is to work for something you consider to be your calling. Every day I code 9-13 hours, sometimes more (it used to be more until my employers told me to stop for some reason), and when I'm not coding I'm reading about coding. I don't do that because I'm necessarily looking for perfection (though of course I'm always looking to be a better person than I was the day before), but because it has become almost natural for me, because programming is something that interests me deeply, something that's part of my life. I don't consider work as something external to me, or as if there were some kind of mystical barrier between my personal and professional life; programming is part of me the same way craftsmanship is part of the craftsman, or carpentry part of the carpenter. This doesn't imply never doing anything different, or focusing only on that, but that doing things related to your craft is not a sacrifice but, many times, a pleasure, and I can definitely say that it is a pleasure for me.
I can clearly feel the effects that all the years of work have had on me; it's clearly noticeable that after all that I'm better than before, not just as a programmer but in many ways as a human being. So I definitely think that not only hard work, but mainly the feeling of being integrated with what one works on, is the essence of a complete life. I'm not saying that you should focus 15 hours a day on it, or that you need to, but that if you want to put more effort into what you do and improve yourself through hard work, that's something that comes with its downsides, but it will also most likely yield the expected benefits.
At such times, think about the nature of the craft, and that man is "A medium-sized creature prone to great ambition."
@maaxxaam
I know he says this later (because someone in chat states it, not on his own, to be fair), but this is a problematic instance of his previous take and other similar takes. He can't hold at the same time that Zig was a better option (was, in the past, when they selected Rust as the next kernel language; I've seen him say this many times now, almost every single time he hears about Rust being selected for the kernel) and that Zig is immature and thus should not be used in the kernel (which is the production-readiness point); these are contradictory beliefs. He can say he thinks Zig is better suited for the job, which he also does, but not that "Zig should've been selected instead of Rust" (which he has said many times); my problem is more with the latter. I also have problems with the "philosophical alignment" thing (as if Linus were not able to curate which languages should or should not be in the kernel for his own philosophical reasons), but I don't care enough to argue with it.
Also, I think the "one language for a specific purpose" take is both good and, on some level, bullshit (relating to the title of your video as well).
It is good because specialization tends to produce better tools fit to their specific purposes; it is good for organization and also allows for more conciseness in what you are trying to express with code.
And it is also bullshit because learning more languages does not imply a loss; it expands your command of all the languages you've already learned by generalizing the knowledge. Having competition is also extremely good, and one of the most common reasons I hear from people is that they "don't want to have to learn so much" (which is laziness³; you also don't need to learn everything, because competition exists and thus you can work with whatever you want most of the time). Also, the more specialized you are, the more you lose context about the world of other things, and the more you need that 'recurrence' and fragmentation inside one workload. You can see this with people using JSON but still inventing more and more protocols around it, or with alternative solutions to protobuf that try to cover logic or some other bs, or even with Lua, where there are dozens of versions of it trying to generalize it for more cases or for performance-based tasks (like LuaJIT or Luau, the Roblox version of Lua with types and other features). I'm also not saying this is bad, but specialization can be a good or a bad thing, and it is generally harder to know the exact domain of the problems you are trying to solve (the problems you are trying to find in the real world to specialize in) than to make a general-purpose language that can be used in certain contexts more than others. I think we should have even MORE languages, more and more and more of them, because no one language will ever fulfill all the needs of all programmers.
This is one of the reasons why I think AIs can hurt the developer environment much more than aid it: they are good at specific things they have tons of material to train on, and their general tendency is not to innovate but to homogenize everything (the wet dream of the "we already have many languages" person).
@ThePrimeTimeagen
I can also see why it sucks, but at the same time a part of me understands why they exist.
It is that, fundamentally, asynchronous functions are different from synchronous functions. When you write synchronous code you are writing something that will be processed linearly and directly by the processor: you can trust the memory that is on the stack, you can trust that nothing in the program will happen out of your control for that specific context (assuming we're not using threads, of course), and there are only a limited number of specific considerations. When a function is async, however, we're dealing with something that is essentially constantly moving around, that will need to be paused and resumed. You can't rely on your stack memory (unless it's copied entirely, which incurs other costs; the different solutions to this lead to Pin in Rust), you can't count on the consistency of direct execution, you won't be absolutely sure which thread will execute your code (if we're dealing with async in a multithreaded environment like C#), and you won't even know when (since that's the purpose of async in the first place). There are a lot of considerations that need to be made when using it (and I also understand that this is part of the tediousness of writing asynchronous code).
Of course, that said, I've suffered a lot with function colors; nothing is more annoying than realizing that you want some "lazy" code in a corner and that to do so you need to mark 300 functions above it (hyperbole). I think that in that sense C# at least manages to partially solve this with the possibility of blocking until you get a result. It wouldn't make a difference in terms of usability if, for example, the entire C# core library were asynchronous, because you can always use a .Result and block until you have the result (not that it is the most performant or safest approach, of course, but sometimes it has its purpose to unify the worlds).
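As a minimal sketch of that last point (my own example, using an arbitrary HTTP call rather than anything from the video):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class BlockingBridge
{
    static async Task<int> FetchLengthAsync(string url)
    {
        using var client = new HttpClient();
        string body = await client.GetStringAsync(url); // "colored" asynchronous call
        return body.Length;
    }

    static void Main()
    {
        // Blocking until the task finishes bridges the async world back into
        // synchronous code. It ties up the calling thread and can deadlock where
        // a synchronization context exists (classic UI frameworks), so it is a
        // pragmatic escape hatch rather than the recommended default.
        int length = FetchLengthAsync("https://example.com").Result;
        Console.WriteLine(length);
    }
}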
I think these are both good and bad thoughts mixed together. A portion of your audience that leans toward modern progressivism seems to have felt awful reading this, but frankly it's not nearly as bad as that. I would just tell that person to lower their expectations a little and to do these things not only to become better, but also because they amuse you. Take a weekend and develop a totally different project unrelated to the company you work for, read a trashy popular fiction book, watch a horror B movie, make prototype after prototype of useless things and throw them away at the end, and also, be lazy; humanity wouldn't have gotten where it is if we didn't look for simpler ways to get the job done. These things are also part of becoming a better person.
If you think of game design as a profession in itself, one that involves being a programmer but is not limited to it, then choosing to focus on all these skills will make you a good game developer. Of course, dividing your attention will bring you less development in more specific activities, such as specifically being a draftsman, musician, scenario designer, screenwriter or programmer, but that doesn't mean you will be bad at these tasks; you just won't be as good as someone who focuses more on developing those particular skills. It's a dilemma similar to the notion that a general practitioner is less able to efficiently practice specialized areas of medicine: certainly general practitioners are extremely capable of treating all kinds of people, but when you have a specific problem and you have the choice, choosing someone who specializes in your problem is likely to be the wiser decision, which does not disqualify the general practitioner or their skills.
@CottidaeSEA
It is not a cached result of an interpretation, really. It turns the IL into machine code that will then run and give the result. It's not like the JIT is doing 2 + 2 = 4 and then storing 4 as machine code; it's more like the JIT converting:
IL ->
.maxstack 3
.locals init (
[0] int32 'a',
[1] int32 'y',
[2] int32 'z'
)
IL_0000: ldc.i4.2
IL_0001: stloc.0
IL_0002: ldc.i4.2
IL_0003: stloc.1
IL_0004: ldloc.0
IL_0005: ldloc.1
IL_0006: add
IL_0007: stloc.2
IL_0008: ret
into ASM ->
mov eax, 2      ; a = 2
mov ebx, 2      ; y = 2
add eax, ebx    ; z = a + y (result in eax)
(this is just a simple example)
Of course, the JIT also does some optimizations in the process, so something like 2 + 2 would probably be optimized straight into 4, but that is not a general rule, nor does it guide the entire generation; it is much more a just-in-time compilation than a just-in-time interpretation + caching.
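For reference, a rough C# equivalent of the IL above (my own sketch, not part of the original comment):

static void AddExample()
{
    // The C# compiler emits roughly the IL shown above for this body:
    // two constants stored into locals, loaded, added, and stored again.
    int a = 2;
    int y = 2;
    int z = a + y;
    // The JIT then lowers that IL to native mov/add instructions at runtime,
    // and with optimizations enabled it may constant-fold z to 4 entirely.
}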
2:30 pm
I think that was a good statement, even if it seems superficial when looked at without specific scrutiny.
There is a pattern to these people's reviews and it usually goes like this:
* I like you, you've done things that please me and/or my friends and/or you've followed some kind of agenda that pleases me -> You're very interested, you do things because you care and that's it
* I don't like you for whatever reason, or you're irrelevant to me -> This is a company, of course it just wants to make a profit, isn't it obvious? There is an evil plan behind this company's actions because companies are evil, BOOOOOO!
When you stop seeing the world as a big platform where people are fighting and there are these abstract entities that people invoke to make their opponents look like monsters, like "profit" or "being a company", and you start seeing that there are human beings there even if they are shitty human beings, things just seem less hysterical.
@marcs9451
I think understanding contextually what it means to "dictate" something is important here, but anyway. He's doing this in what <he> thinks is the right way to "shoot yourself in the foot", and he's trying to claim that this is the best way to learn something. I don't think that's the case, not even close. I've known people who learned a lot more through a gradual climb in difficulty than anything like starting directly at the edge of the precipice could bring, and I've also known people who don't work like that. Trying to find a "perfect formula" for obtaining deep knowledge is a task doomed to failure from its inception.
In the same way that A can obtain deep knowledge by starting with the most complicated things, the ones that would cause him more failures, B can obtain deep knowledge through an association with usefulness or curiosity; "shooting himself in the foot" can be a disincentive to knowledge for B and an excellent stimulus for A. I myself have been between A and B in my life, and so I feel that attempts to frame it that way can be dangerous to one's quest for knowledge.
But anyway, that's just a rant of mine, you can ignore it. Sometimes I feel extremely tired of people all the time trying to "point out" the best way to do this or that, as if it were possible to know what will work better or worse for a person; sometimes it's better to just let it be.
I don't think Rust is old enough to have many games developed in it; even C++ took a while before it had its first successful commercial titles and, at the time, games were much scarcer, so there was more room for innovation (C++ was conceived in 1979, and the first commercial games using the language only started to be released from 1990 onwards). Not to mention that almost every time I see someone developing a game in Rust, there are a number of people saying "you are developing in the wrong language, it should be C++" or "why did you choose such a strange language that is not used for games instead of C++?". People are simply creating a self-fulfilling prophecy about Rust, in such a way that everything done in the language receives a considerable amount of criticism, and when things are not done people say "I will only use it if there are products made in it"; it's funny if you stop to think about it. But having said that, there are games being made with Rust, and I believe that in the next 5 or 10 years, if nothing colossal happens in the industry, we should see good games made in Rust coming out.
@isodoubIet
> Of course it's a bad thing. It inhibits code reuse
It really depends on what you are calling "code reuse"; I'd have to disagree with you on this one unless you show some concrete real-world examples of it.
> loosens the system modeling as you're now encouraged to report and handle errors even if there's no possibility of such
This is a sign of bad API design and not a problem with having errors as values. If you are returning a "maybe error" from a function, then it <may be> an error; it is a clear decision to make.
> increases coupling between unrelated parts of the code
I mean, not really; you can always flatten errors or discard them easily in an errors-as-values (EAV) model.
> and can force refactorings of arbitrarily large amounts of code if even one call site is modified.
Again, this is true for any kind of function coloring, including <type systems> and <checked exceptions> (like Java has). Well-designed code should be resilient to this kind of problem most of the time.
> You can say "this is a tradeoff I'm willing to make". That is fine. You cannot say this isn't a bad thing.
I absolutely can say it is not a bad thing. It is not a bad thing. See? I don't think function coloring is necessarily bad, so I would not agree with you upfront that this is a bad thing. I think being explicit about what code does and the side effects it can trigger is a good thing, an annoying thing sometimes, I can concede, but I cannot call it a "bad thing" in itself, only the parts that are actually annoying (and the same goes for when you don't have this kind of coloring and it then blows up in your face; that is the "bad thing", not the lack of coloring itself).
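To illustrate the "flatten or discard" point above, here is a minimal C# sketch of my own with a hypothetical Result type (nothing here is from the thread; it just shows the caller choosing whether an error matters at a given call site):

using System;

// Hypothetical minimal errors-as-values container, for illustration only.
public readonly record struct Result<T>(T? Value, string? Error)
{
    public bool IsOk => Error is null;
    public static Result<T> Ok(T value) => new(value, null);
    public static Result<T> Fail(string error) => new(default, error);

    // "Discard" the error: fall back to a default when the call failed.
    public T OrDefault(T fallback) => IsOk ? Value! : fallback;
}

static class ResultDemo
{
    static Result<int> ParseAge(string input) =>
        int.TryParse(input, out var n) ? Result<int>.Ok(n) : Result<int>.Fail("not a number");

    static void Main()
    {
        // The caller decides locally whether to propagate or flatten the error;
        // no try/catch or rethrow ceremony has to spread through unrelated code.
        int age = ParseAge("forty-two").OrDefault(0);
        Console.WriteLine(age); // prints 0
    }
}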
@danielhalachev4714
Structs, reified (non-erased) generics, advanced pattern matching over types and lists, async/await, auto-properties (and syntactically more appealing properties in general), native iterators, extension methods, operator overloading and cast overloading, optional parameters, genuinely stack-only arrays and types (structs), opt-in native memory management with pointers and memory allocation outside the garbage collector (in .NET 6+), and more.
Just the reified generics by themselves are already huge compared to Java; try comparing the performance of a list of bytes in Java using List<Byte> vs a list of bytes in C# with List<byte>, it is not comparable at all.
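A minimal sketch of that last point (my own example; the Java side is described in comments rather than shown):

using System;
using System.Collections.Generic;

class GenericsDemo
{
    static void Main()
    {
        // In C#, List<byte> is specialized at runtime: the elements live unboxed
        // in a contiguous byte[] backing array.
        var bytes = new List<byte>();
        for (int i = 0; i < 1_000_000; i++)
            bytes.Add((byte)(i & 0xFF));

        // In Java, List<Byte> erases to a list of references, so every element is
        // a separately boxed Byte object, which costs far more memory and cache misses.
        Console.WriteLine(bytes.Count);
    }
}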
@Me-wi6ym
I had a quick look at the Scala docs for those, but I don't think case classes are similar to Rust structs. They are reasonably similar to some things you can do with structs, but still quite different.
The point of structs is not that they enable a certain kind of behavior, but that they are inlined in memory. Scala runs on the JVM, which as far as I know currently doesn't support value types (until Valhalla arrives or something happens), so all custom types are boxed in memory in the "worst" case (which is much, much, much slower).
Considering Scala also does not have a borrow checker (while Swift has an optional ownership feature with ~Copyable types), I feel like Swift can model <more> features closely to the Rust API. At the end of the day I don't think either is exactly akin to what Rust can do, but Swift for me is closer in this regard because it is also considered a systems programming language.
Still, I want to use Scala a bit before giving a definitive conclusion; I have used Kotlin so far but not Scala yet. I think Kotlin is very close to Swift in syntax (different in many aspects, but close in many others; I personally love the trailing lambda feature of both languages).
@isodoubIet
You have a problem in understanding, I see.
My claim was: ["Plus, function coloring is required whenever you <use any type system at all>, and it is not a problem because the Rust type system can infer the types for you, so most of the time this is not a concern."] Learn to read things entirely instead of nitpicking parts. Function coloring is a de facto thing for any language that has types, including C++. It does not mean that errors will be part of that function coloring; it just means it <exists> in the language when you are using it, and if the point is that <function coloring = bad>, this should be a problem for you.
> Also nonsense. If you change the error handling strategy of some function deep in the call stack, you'll end up needing to refactor an arbitrary large amount of code.
Just like if you suddenly need your exceptions to matter more locally, you need to refactor an arbitrarily large amount of code to catch them at every single possible call site. It also says more about you if you are designing your systems so badly that this is often necessary, but anyway. And if you use Java, for example, exceptions can be <function coloring>, as they need to be marked in the header of functions (and if you skip them you need to mark your function or convert them to unchecked ones). This is just a terrible point in defense of exceptions.
@lpprogrammingllc
See how funny this is?
You came here, randomly shat on the language by claiming it had "broken promises" and showing a compiler bug, and you cite Residual Entropy as your source, whose videos I have watched, and he seems very understanding about the problem, because he understands the difficulty of what is at hand; now you are saying I'm exhibiting some kind of "rust advocate behavior", as if this has ANY meaning whatsoever (spoiler: it doesn't). You say I'm "assuming bad faith" when "someone doesn't like what I like", and by assuming this, you are also assuming bad faith on my part, thinking I'm doing that instead of having actual reasons to believe so (and spoilers again: I have, and in my previous comment I cited some of them).
No, this is not a language-level bug, because that does not make sense at all; the language does not even have a formal specification to have "language-level bugs". The bug in question is a product of assumptions they needed to make when implementing the current trait solver, and it is obviously not intended behavior by any means (it is literally caught by MIRI, so it should not be an intended thing; <and also>, since the language, as you said, is <promising safety> in <safe code>, it should follow by charity that this is not intended to pass as sound code, while it does because the verifications were not properly implemented at the compiler level), so it is indeed a <compiler bug>. It is a compiler bug that cannot be easily fixed because doing so requires modifying many assumptions in the compiler, because this is a complex bug, but it is still a bug that is being fixed (and already has a fix in the new trait solver), so calling it a "language-level bug" is just mean. You cited the bug report as proof that it is a language-level bug and, to zero surprise, it does not imply that anywhere.
I'm open as well to proofs that this can be classified as a "language-level bug", but more than that, I'm more interested in knowing how this changes anything for anyone interested in the language when the developers are already dedicating their work to fixing this bug.
Yes, the bug reports are still marked as open because the fixes are not yet in the stable language and because the new trait solver is not yet stabilized. I also don't know when it will be (though I've read in their roadmap that it will be ready for 2027), but it is being worked on, and as such it is not in good faith to say they had "broken a promise" because such a complex bug exists (a bug that has 0 records of being found in real codebases until now; a bug that can be caught with MIRI, which is something you should <already be using> to really ensure your code has no detectable safety problems) and that is <actively being worked on> (i.e. this is not a thing they "forgot" or "ignored").
As for this:
["Again, this is orthogonal to the real reason I will not use Rust. Which is the complete lack of trust I have in the entire Rust supply chain, because of people acting like you."]
You are free to think the bs you want to think, and to say that people responding to your lies are acting like "rust advocates" (when in fact you were lying: you said it was "unlikely to be fixed without serious breaking changes", you had not even read the material available on the problem when you said that, and you proved this in your later comment).
Either way, I'm not a "rust advocate"; my main language of daily use is not even Rust, it is C#, and I program in many languages. I'm no more a "rust advocate" than I am a "truth advocate", and you are indeed acting in bad faith with your comments; you are being weird and shitting on things (you literally started this with a comment citing something you DON'T understand, pasting only one part of a function that <is not very unsafe> without the other part), and this is not the behavior of someone who really wants to have a purposeful discussion about a topic.
@Bebinson_
It seems to me that you said you weren't going to try to say how the American DMV system should work, but right after that you actually said how the American DMV should work, so your statement seems discursively empty.
This type of problem has absolutely nothing to do with whether the government has delegated a function to a company; it has to do with the nature of that function and the very notion of delegation. Delegating means granting: it means you pass the authority over a certain task to a third party. If you read between the lines, that means the government has basically moved this burden from itself to a company, and that fundamentally does not solve the inefficiencies that would be seen in the government itself. It improves the chances that something good will come of it, as companies tend to be more efficient in the way they do things, but as long as this is still a concession it is still a right to be the only one to do a certain thing, and that's what government is, and it encourages irresponsible and bad behavior like this. So the solution should not be to "nationalize" this task, but to free it up completely and allow each company to provide its own solution to the problem, and allow them to compete on their solutions in order to establish a higher degree of quality than the government could (or worse, as the case may be). That is the only fair way to establish whether private contractors are really worse or better than the government.
I believe calling a study an "objective fact" has not helped PrimeTime's perception here. The exorbitant amount of lying studies out there is no joke; there has literally been a crisis in the social sciences caused by statistical lies, so skepticism is always welcome.
That said, it seems reasonable that you'd generally be more productive if you had to exert yourself for less time, because just as muscles can't exert themselves for too long before becoming less efficient, the brain probably also can't focus for too long before it gets tired. However, I think the bottom line is not the time you spend working, but the disposition you bring to the job and the amount of effort you are willing to invest (in the same sense, muscles grow when their strength is exhausted, and the brain probably becomes more proficient the more you develop it, so we should check whether, in the long term, the people working 4-day work weeks were actually better at their jobs than programmers working more hours).
It seems to me that this whole discussion is really pointless.
It's pretty straightforward to work with the feature model, and it doesn't tend to cause as much conflict or desynchronization as people say it does. It's just a matter of keeping "features" short and to the point, and when a feature gets long enough, you merge, not the feature into dev, but dev into the feature, thus keeping the two in sync in exactly the same way as if it were a short feature, but without the problem of merging something incomplete into dev.
Briefly
For quick and simple features:
1d. feature -> dev / Fast, easy and straightforward, you developed it and then integrated it
For longer, more complex features that require intermediate steps that are impossible to skip:
1d. feature
2d. dev -> feature / Keeps the feature in sync with changes in the dev, fixes any conflicts that appear (which should be rare)
3d (finished). feature -> dev / Finished, will probably not have any conflicts and will be complete in the sense that it resolves a given change completely
@s3rit661
I don't think this is "the same argument".
1. Checked exceptions are, in fact, optional in Java; you can quite literally ignore them and turn them into unchecked ones (and many of the exceptions that actually matter, like NPEs, are unchecked anyway). Kotlin is a language that runs on the JVM and literally ditched checked exceptions; if this concept worked well in Java, I don't think they would have gone that route.
2. The Rust thing is about <memory safety>, not about "ensuring you never make mistakes". Memory safety is a non-problem in languages like Java, and Rust ensures you don't make mistakes with memory via the borrow checker, which is a different thing from an error-checking mechanism. The guarantees you get are extremely specific as well: it is not "it is impossible to make mistakes", it is "if you use the type system, if you check your unsafe code with tools like MIRI and test it properly, and if you don't circumvent the borrow checker rules, then you are in the space of programs where it should be impossible to create a memory safety issue". Errors as values are a bonus, and they work pretty well with this system.
@shinobuoshino5066
It depends on what you mean by "proof". Rust is literally logically checked for safety, so on this point I just think you are blatantly talking about something you don't know. Rust, of course, cannot solve <all> classes of safety bugs, but within the borrow checker rules and the default strictness of the compiler it can <ensure> the safety and correctness of the code as written, by the logic it is written in, in a great deal of cases. It has proved itself on Android as well: about 20% of all new Android code is being written in Rust, and Google itself reported that there are almost no memory-unsafety bugs discovered in this new Rust code; in their 2022 report: ["To date, there have been zero memory safety vulnerabilities discovered in Android's Rust code."]
While this is not definitive proof that Rust will solve all issues with memory safety in practical code, heuristically we can say so, because the safety-checked part of the language is sound, and because we know a bunch of companies are adopting it over time and they are not generally going back to other languages because of security problems.
@TheManinBlack9054
Was it?
I searched for it and this was what I got in the sources:
["phi-1 is a Transformer-based model with 1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook quality" data from the web (6B tokens) and synthetically generated textbooks and exercises with GPT-3.5 (1B tokens)"]
and
["The language model Phi-1.5 is a Transformer with 1.3 billion parameters. It was trained using the same data sources as phi-1, augmented with a new data source that consists of various NLP synthetic texts."]
["Phi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value)."]
["We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone. The innovation lies entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered publicly available web data and synthetic data."]
None of the phi models are trained on "purely synthetic data", only mixed training data, so we don't know to what degree they are really being affected. We also don't know whether the degradation from the synthetic data produced by today's early foundational models is high enough to make a big difference, compared to what it will be in future foundational models and/or upgrades to them, with more and more recursively fed synthetic data becoming part of their training.
I'm also not aware of any foundational model trained on purely synthetic data that is publicly available for checking; if you know of any, I would be interested in seeing them.
@yyny0
(my message apparently got swizzled, so sending it again)
This is true, yes, but arguably the overhead of doing these checks is minimal if you constrain yourself to when they actually need to happen. And while it is true that you cannot circumvent this problem when having errors as values (unless you are willing to deal with pointer manipulation, in which case you can pretty much skip the check, but nobody does this), you also cannot avoid it if you are trying to enforce locality in error handling with exceptions (as you will need the try-catch everywhere in your code, and you will pay the cost of the extra code in your function for dealing with the exception, just like in the errors-as-values case).
It is also arguable whether the cost of checking the stack frames that <do> need the cleanup would not outweigh the performance benefits you are claiming, but I have not measured this, and maybe I'm not considering even more factors here, like the non-locality of the code causing problems for the CPU's predictors (or the whole context-switching problem, which is a pretty massive hit for CPUs, although I cannot say the cost amortizes when you don't have many exceptions).
@d3stinYwOw
Freedom should even extend to selling asbestos, which doesn't mean that the consequences of its misuse shouldn't be legally penalized; I think even you could agree with me on that. If you're using asbestos in your products and people are unaware of this and of its health risks, you're clearly an aggressor and potentially a vile murderer, and it makes total sense for you to be stopped. Now, if you sell asbestos while making its risks clear, what exactly is the problem? I believe even you should be able to acknowledge that scientific research, for example to make the material safe, depends on the accessibility of this material in some cases, and that restricting the use of something is restricting possible innovations that may arise from it, and even restricting uses that prevent the misuse of this material (like, for example, encouraging biological research into ways of treating diseases caused by asbestos). Accountability should be tied to real cases and not to society as a whole.
Finally, I'm not saying it's "black and white"; I'm saying that this specific issue is a clear problem, one which yes, can potentially bring more safety in the long term, but at the same time can destroy or cause irreversible damage to the area of software development and free software. These are measures made by people who have no idea how the things we do work and yet still want to screw everyone over under the excuse of "increasing our security". Just look at measures like the UK's "Online Safety Act" (and similar measures that have been proposed over the years in the US itself) to know that not all the security in the world is worth some things, even though it's not all "black and white".
@yyny0
> [ Crates like anyhow? Most libraries do not use that crate, and in fact, the recommendation from anyhow author is to NOT use them for library code. This means that the errors returned from those libraries do NOT have stacktraces, and NO way to recover them. ]
Oh, now I get it. You want <library stack traces>, not <stack traces in general>. Well, first: it is indeed possible to get a stack trace from errors as values, so your statement was wrong. You can move the discussion to the fact that this is more of a problem with libraries, and I would agree, it is a pain in this specific case, but you <cannot say> it <can't be done>; it can. Now, you said this is a problem with <errors as values>, but this is more of a problem with <Rust>. A language crafted around errors as values where stack traces are opt-in at the consumer level of libraries would literally solve this problem, and then your point would be much less interesting, so it is kind of fragile to criticize errors as values in this regard based on the design decisions of Rust.
> [We've had several "fun" multi-hour debug sessions because of that.]
So you are using Rust? What, you just said your service has many exceptions and all that in the other comment, or are you talking about a different project or something like that? Also, can you explain to me exactly how the lack of stack traces in specific libraries cost you "several 'fun' multi-hour debug sessions"? I have literally never experienced a "multi-hour debug session" because of something like that, and I work on a heck of a lot of projects, so a more concrete explanation would be good for the mutual-understanding part of this discussion.
> [Also, those crates are opt-in, and even some of our own code does not use anyhow, because it makes error handling an absolute pain compared to a plain `enum`s.]
It makes? What??? It was literally made <to make using errors less painful>; given the purpose of this library, why do you find it <an absolute pain> compared to plain `enum`s? You also don't need to use anyhow: you can easily capture stack traces in your application by using std::backtrace::Backtrace::capture or force_capture. The Rust docs even say it is a pretty easy way of gathering the causal chain that led to the errors; you need to implement like 30 lines of code and use it across your whole application if you want to.
@yyny0
> [An error is incorrect by definition, it is NOT a valid representation of your system, in fact, the whole point of returning an error value is to describe invalid program state.]
What I said: ["a perfectly valid representation of a specific part of a system in a given time"]. An error IS, in fact, a perfectly <valid> (in a formal sense, and sometimes in business rules) <representation> of a <specific part> of a system at a given time. When you log in and type the password wrong, InvalidPassword <IS>, in fact, a perfectly reasonable and valid <representation> of state in your system (as it should be; it should not panic your software, just show a simple message to the user so they can type the correct password). When you call a method to write to a file and the file does not exist, receiving a monad where the error is represented as a possibility <IS>, indeed, a perfectly valid way of representing your program state. I just don't know why you are saying this. An error could be defined as an "undesired state given a specific goal intended in a closed system", but not necessarily as an "invalid state" if it is conceived as part of the representation of your possible states. Period.
Proceeding.
> [As for statistics: our production product has ~20k `throw`s across all our (vendored) dependencies (of which ~2k in our own code), and only 130 places where we catch them. Most of those places also immediately retry or restart the entire task.(...)Additionally, 99.9998% of those tasks in the last hour did not return an error, so even if the cost of throwing a single error was 1000x the cost of running an entire task (which it is not), it would still be irrelevant.]
You said "it should never happen", and then you showed me your personal data and said it happens 20k times, I would say this is a damn high number to say it "should never happen". Also, you did not give me any timeframe, not a comparison between different services, giving me one personal example is just anedoctal example. How exactly I'm supposed to work with this and say there are not very statistically representative systems that <do> have an impact with throwing exceptions for everything? This is a pretty specific response.
> [I would consider that "grinding to a halt".]
You consider restarting a program immediately after an error the same as "grinding to a halt"?