Youtube comments of (@diadetediotedio6918).
4:36
No, really, this is not that true. Even in the assembly days people were already trying to get rid of it, because it was just not that practical. It obviously sounded like something hard to grasp, like "how will the 'compiler' make better-optimized assembly code than us", but the impracticality of assembly was very clear from the beginning (people already used a LOT of macros at the time, which makes writing assembly far less manual labor than writing it instruction by instruction, so the jump was not that far).
It is completely different when you are selling a BS statistical machine that may or may not produce working code and, in the worst case, deletes your whole database because people became so reliant on it that they don't check things anymore (100% of the code written by it; we know people take more time to read code than to write it, it is much harder to understand, the probability of hidden bugs is much higher as well, etc.). The problems here are of a WAY different kind in this sense.
For me the question boils down to a few simple concepts:
1. The object-oriented paradigm revolves around passing data between unified domains, not necessarily "classes" or "structs". That means you send information back and forth and try to make each specific domain responsible for interacting with that dataset within its own scope of state.
2. The functional paradigm revolves around making code immediately obvious in a declarative way, even if that involves using data encapsulation methods and the like. The code must read fluently, and the result of an expression must be immediately obvious not from an individual stream of parts but from the complete expression (and, of course, it must be decomposable).
3. The procedural paradigm revolves around ordering things in a specific, controlled way, not through statements or units, but through small, specific, determined logical steps that modify or change the data as they are performed. The scope of procedural code is always linear, and it must be possible to read and understand it linearly.
To that extent, I can see that all paradigms employ common concepts and can be summarized by common notions present in each of them individually, but which are not immediately reducible to those notions, like a complex system. Each of them has its place, and I can understand why multi-paradigm languages won while purely functional or purely object-oriented languages became less and less popular.
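To make the contrast concrete, here is a minimal sketch (in TypeScript; the function and class names are illustrative, not from any particular codebase) of the same task, summing the even numbers of a list, written in each of the three styles:

```typescript
// Procedural: small, ordered steps mutating local state; reads linearly.
function totalEvenProcedural(xs: number[]): number {
  let total = 0;
  for (const x of xs) {
    if (x % 2 === 0) total += x;
  }
  return total;
}

// Functional: the result is obvious from the complete expression, declaratively.
const totalEvenFunctional = (xs: number[]): number =>
  xs.filter(x => x % 2 === 0).reduce((sum, x) => sum + x, 0);

// Object-oriented: a domain owns its dataset and is responsible for
// answering questions about it within its own scope of state.
class Numbers {
  constructor(private readonly xs: number[]) {}
  totalEven(): number {
    let total = 0;
    for (const x of this.xs) if (x % 2 === 0) total += x;
    return total;
  }
}
```

All three produce the same result; what differs is where the responsibility, ordering, and readability live.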
@entelin
I'm also a programmer, and I've actually read how the systems work (in addition to playing actively for quite some time). The things that happen in the health system have real material grounding: arms and legs, musculature, bones and cartilage are all real entities in the world of DF, and they are subject to being affected. I don't know if you consider this a "gameplay factor"; I consider it an important storytelling factor that the game brings to the player in a way literally no other game ever has (not RimWorld, not Oxygen Not Included, not even CDDA, although the latter is quite advanced). RimWorld's health system is also interesting and advanced, but it is ultimately built on a much smaller and less detailed compositionality than DF's; you simply haven't played enough of the two games to know how starkly they differ from each other.
Of course, to be fair to you, some things are just decoration, yes: the way many things are described, for example, and some of the logs; many things really are just randomly generated texts to entertain the player. Even those, however, have their importance in the ability to create an emergent narrative (which is the goal of DF). For everything else, either you understand its importance in the game's simulated world, or you're just spitting on someone's nearly 20 years of work. As a programmer you should be more responsible with the things you say out there (and no, you don't know what happens inside, because you clearly haven't seen all of Tarn's talks, let alone read the game's source code; I've seen a fair amount of the talks and articles, and unless he's radically lying, a lot of that verbosity is heavily tied to simulating the full complexity of the game's interactions).
Of course, maybe DF just isn't your type of game; it's a game made for your brain to interpret things that occur in a narrative and consistent way (although they are not always so consistent, even if simulated; take the dwarves' memories as an example), but that's about it.
My sincere view, as someone who has been programming since age 12, is that hard work pays off, but only if it's something you want to aim for. I'm not talking in terms of it being something you necessarily like, but of aiming for something bigger than yourself and working hard to achieve that goal; the other alternative is to work at something you consider to be your calling. Every day I code 9-13 hours, sometimes more than that (it used to be more until my employers told me to stop, for some reason), and when I'm not coding I'm reading about coding. I don't do that because I'm necessarily looking for perfection (though of course I'm always looking to be a better person than I was the day before), but because it has become something almost natural for me, because programming interests me deeply; it's something that's part of my life. I do not consider work as something external to me, or think that there is some kind of mystical barrier between my personal and professional life; programming is part of me the same way craftsmanship is part of the craftsman, or carpentry part of the carpenter. This does not imply never doing anything different, or focusing only on that, but that doing things related to your craft is not a sacrifice but, many times, a pleasure, and I can definitely say that it is a pleasure for me.
I can clearly feel the effects that all the years of work have had on me; it's clearly noticeable that after all of it I'm better than before, not just as a programmer but in many ways as a human being. So I definitely think that not only hard work, but mainly the feeling of being integrated with what one works on, is the essence of a complete life. I'm not saying that you should focus 15 hours a day on it, or that you need to, but that if you want to put more effort into what you do and improve yourself through hard work, that is something that comes with its downsides but will also most likely yield the expected benefits.
At such times, think about the nature of the craft, and that to each man, men are "A medium-sized creature prone to great ambition."
@blallocompany
No? Most of the time people are doing this; it is not an example of "limited situations". And again, to stop reinforcing this BS: we are not "emitting words", we are [choosing] words. Stop biasing your own vocabulary. We choose words deliberately [all] the time, which is my point; even when we are making accidental mistakes we are [choosing] our words and not just "emitting" them. It is a deliberate act even when it is a mistake, and that's literally [why] it is called a mistake in the first place. A mistake is only possible if we [intended] to make a correct decision, and a decision is only a decision if it has [intent] and [volition], both things LLMs don't possess. LLMs, in fact, don't even "make mistakes"; we use those words with them because it is easier than saying "they selected the statistically most probable words that, in this specific case, lead to a factually incorrect final cohesive answer as interpreted by the human mind". It is the same as when a program does something that you, the programmer, did not intend it to do and you say it is "misbehaving" or "behaving incorrectly": the program is behaving perfectly correctly according to the explicit instructions you gave it; what is wrong is the encoding of the desired instructions (by the programmer) into it.
And to end it: you are choosing those words to justify your position, you literally have introspective access to the next word you are writing right now, and when you read my comment again, you will think about it and start responding with your own words, intending to answer me with a specific line of reasoning.
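The "misbehaving program" point can be made concrete with a toy example (mine, not from the thread): the code below faithfully executes its explicit instructions, and the defect lives entirely in how the programmer encoded their intent.

```typescript
// Intent: the arithmetic mean of a list.
// The machine executes exactly what was written; the "misbehavior" is the
// programmer's encoding error (dividing by length - 1 instead of length).
function average(xs: number[]): number {
  let sum = 0;
  for (const x of xs) sum += x;
  return sum / (xs.length - 1); // bug: the programmer meant xs.length
}
// average([2, 4, 6]) yields 6, not the intended 4, yet the program is
// behaving perfectly correctly according to its instructions.
```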
@theeldenpring
Hahaha, I think you really misunderstood my point. AIs are nowhere near what a nuclear bomb is in comparative terms. The reason for my analogies was in relation to some of the more common arguments found out there, such as that AI will somehow "steal artists' jobs", since this was the same argument used by people when factories were becoming popular; in that sense, bringing this up was equivalent to bringing a Luddite with a position similar to yours into the argument. Another reason I brought this up is that even if you don't like a technology, you are not able to simply dictate the course of society and say whether that technology should be stopped, because there are currently individual people with AIs, there are small companies using them, and there is a whole market built on top of the notion that AIs are tools that can be used in their own way. Finally, on the issue of force: to the extent that you believe something can be "prohibited by law", you are advocating that we hit the people who make use of that something, and ultimately that these people are killed. That's where the "simple, no?" came from, because in practice enacting a law that prohibits something that may be on the physical computers of millions of people is to authorize censorship and state control at a level never seen before; it is a way of licking boots so absurd it makes me nauseous.
Nah, that causes more jobs to exist. JavaScript is a flexible language, and the creation of various frameworks and libraries allows diversity in the market; there's no "enterprise" behind JavaScript that will make one framework the default, so the community does, and the community produces diversity, which is both good and bad.
Good because with diversity more openness is added to the market, creativity flows, and better solutions get their moment as the ecosystem grows. It is like evolution itself: nowadays pure JS is far less used to develop the web because these thousands of frameworks proved better and easier to handle for most people than vanilla, and they compete with each other, which will eventually make some disappear and some not.
The bad part is that anything made by humans dies like humans do. With time these projects will simply be abandoned, and hundreds of sites will be stuck on old and unmaintained systems; they will need to migrate to better solutions, and that is not always possible. Diversity can be bad too because it hardens the decision-making process and can actually make devs less productive, and with diversity there are other caveats, like the number of people necessary to actually develop a good and mature framework versus the number of developers actually assigned to an existing one, because they want to develop their own instead.
But that's how it is: everything in programming, and more broadly in life, comes with some good things and some bad things, an exchange. JavaScript is like an experiment that lets us see what occurs in this type of environment, and in my view it is really good; the community around JavaScript is actually making the language richer without ever touching it directly.
@bangunny
Nah, I think this is a misconception. You can use Tailwind before knowing CSS and use your Tailwind knowledge to use CSS better, or it can be the other way around: with your CSS knowledge you can know how to use Tailwind better.
In principle, this is possible because Tailwind is very descriptive; it tells you what it is doing: "bg-xxx" is clearly about a "background color", "w-xxx" is about "width", "pb-xxx" is about "padding bottom", etc. You learn from the naming and from the experience of using the tokens. After that, when you use CSS, you will start discovering the same keywords in an "extended form", so the knowledge is very transferable.
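As a rough illustration of that transferability, the mapping below pairs a handful of utility tokens with the plain CSS they roughly expand to under Tailwind's default theme; treat the exact values as approximate rather than authoritative.

```typescript
// A few Tailwind utility tokens and the CSS declarations they roughly
// correspond to (default theme). The token names describe the property
// they set, which is why the knowledge transfers in both directions.
const tailwindToCss: Record<string, string> = {
  "bg-red-500": "background-color: #ef4444;", // bg-* -> background color
  "w-4": "width: 1rem;",                      // w-*  -> width
  "pb-2": "padding-bottom: 0.5rem;",          // pb-* -> padding bottom
  "flex": "display: flex;",                   // layout utilities
};
```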
> *Mining bitcoin does not create wealth, just as money printing does not. It merely redistributes existing wealth.
This is true.
> *Society does not benefit from bitcoin mining (only the miners do). That's why the energy is wasted, even if it is green or unused.
This is blatantly false; you should read the whitepaper and familiarize yourself with how the technology works and what it is aiming to solve before saying something like this. It absolutely does have a social benefit.
> *Imagine a democracy where you need to dig holes and fill them up to be able to vote. Do you see the nonsense? People work, but the work is unnecessary!
The nonsense is democracy itself, but I would argue your democracy idea would be WAY better than what we currently have: people would first need to actually put their effort where their mouth is, and populism by itself would not get around that.
> *If proof of work has to be used, let's do work that is beneficial, like training AI, not finding big numbers!
Again, this is ignorance.
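For readers who want to see what the "work" actually is, here is a minimal proof-of-work sketch (the `mine` function is my own illustration; Bitcoin's real scheme hashes block headers against a difficulty target, while this simply looks for leading zero hex digits):

```typescript
import { createHash } from "crypto";

// Search for a nonce such that SHA-256(data + nonce) starts with
// `difficulty` zero hex digits. The search is the "work"; verifying a
// candidate takes a single hash, which is the asymmetry mining relies on.
function mine(data: string, difficulty: number): number {
  for (let nonce = 0; ; nonce++) {
    const hash = createHash("sha256")
      .update(data + nonce)
      .digest("hex");
    if (hash.startsWith("0".repeat(difficulty))) return nonce;
  }
}
```

The point of the scheme is not "finding big numbers" for their own sake, but that the expended work is trivially checkable by anyone.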
@turnert2005
They don't need to have it, but it's the PARENTS' DUTY to handle that, not the government's.
Kids have ALWAYS had degenerate content available to them: in magazines, even going to a beach with family, sometimes even walking down a street, even in advertising, even in their own minds, and in schools.
The internet was not the cause of this and was never the main problem; the problem is a complete LACK of parenting and the desacralization of parental education. Everything is fine if the government says so, but the parents can say nothing; this shit is destroying the world.
@JohnTitor2036
I tried to replicate your "no biased environment": I removed my custom instructions, gave it the "How it works" section, and also added your phrase "I feel it's gonna be used to help targeted ads" at the end of my sentence asking for its opinion.
The response, of course, was not even close to what you got:
```
The mechanism you've described, known as "web environment integrity attestation", appears to be a proposed system for verifying the integrity of a device running a web application. By attesting to the integrity of a user's environment, this system can provide stronger assurances about the security of web interactions. In a basic sense, it functions as a form of digital trustworthiness certification.
In theory, such a system could have a range of beneficial applications, including enhancing security and reducing fraud. For example, an online bank might use it to ensure a user's device hasn't been compromised before permitting a high-value transaction.
However, as you've noted, there are potential privacy concerns associated with this kind of technology. While the document specifically states that a unique device ID should not be included in the attestation, it does propose the inclusion of other low-entropy device information. Low entropy data is less unique and individually identifiable, but when combined with other data, it could potentially be used to create a more specific profile of a user or their device.
In particular, concerns may arise if this mechanism is used in ways that weren't intended or are not in the user's best interests. For example, it could potentially be used to aid in tracking users across websites, even if that's not the primary intended use. Such data could also potentially be leveraged for targeted advertising, as you mentioned. Although the document proposes the use of a "content binding" to prevent intercepted attestations from being misused, it's not clear how effective this would be in all cases.
Overall, it's essential for any implementation of this technology to prioritize transparency and user control. Users should be fully informed about what data is being collected, who it is shared with, and how it is used. They should also have the ability to opt out if they choose. Robust privacy protections will be critical in ensuring this technology is used responsibly and ethically.
It's also worth noting that the description provided is of a proposed system and it's not clear yet how it might be implemented in practice or what specific standards or regulations might govern its use. Any concrete analysis would depend on these and other factors.
```
I even tried to explicitly bias the AI by asking "surveillance capitalism?", and it still did not go that far in its assertions:
```
The term "surveillance capitalism" refers to a system where the economic model involves collecting, analyzing, and leveraging user data for profit. This model is often associated with large tech companies, such as Google or Facebook, who offer "free" services in exchange for data, which is then used to provide targeted advertisements, among other things.
The web environment integrity attestation system you described could potentially be co-opted into such a model. If the data collected through the attestation is used to build a profile of a user's habits, preferences, and behaviors, it could contribute to a more detailed understanding of the user, which could then be monetized through targeted advertising or other means. This could be seen as a form of surveillance capitalism, if the information is used primarily for profit rather than for enhancing the user's experience or security.
However, it's important to note that this is not an inherent aspect of the technology or the attestation system itself. It would depend largely on how the system is implemented and used. If stringent privacy protections are put in place, and the data is used primarily for the purpose of enhancing security or user experience, it might not fit into the category of surveillance capitalism.
It's also crucial to consider the context. For instance, a system that is used in a corporate environment for security purposes may have different implications than one used by a social media company that profits from selling ads.
In the end, whether or not this falls into surveillance capitalism depends on how the technology is implemented and used, and whether user data is commodified for profit in ways that may not fully respect privacy and autonomy.
```
So I don't know what exactly you consider to be "no bias", but it does not appear to me that that was the case here. You can easily share your chat to prove it if you want, and to allow more precise replication; I will not doubt direct evidence.
@maaxxaam
I know he says this later (because someone in chat points it out, not on his own, to be fair), but this is a problematic instance of his previous take and other similar takes. He can't hold at the same time that Zig was a better option (was, in the past, when they selected Rust as the next kernel language; I've seen him say this many times now, almost every single time he hears about Rust being selected for the kernel) and that Zig is immature and thus should not be used in the kernel (which is the production-readiness thing); these are contradictory beliefs. He can say he thinks it is better suited for the job, which he also does, but not that "Zig should've been selected instead of Rust" (which he has said many times); my problem is more with the latter. I also have problems with the "philosophical alignment" thing (as if Linus were not able to curate which languages should or should not be in the kernel for his own philosophical reasons), but I don't care enough to argue about it.
I think this is a misguided opinion. Software developers are engineers, but also architects and also bricklayers; we do all of this at once because software is that complex. It is obviously different from physical engineering, but also not that different from other kinds of engineering. It is plainly possible, for example, to build a simple robot as a proof of concept without carefully planning it beforehand; you just need the right intuitions. Curiously, it is also possible to build a house without planning; I live in a country where this is fairly common, and those houses tend to last for decades or more. They are obviously not marvelous constructions, but they don't collapse (bridges are in another league, but even they are sometimes made this way).
As a software engineer, you don't need to care about physics (unless we are talking about performance), so mistakes are usually not extremely dangerous; they are also expected, since software has dozens or hundreds of moving parts, whereas with a bridge you only need to make it stable, reliable and durable. These are different professions; you should not collapse their definitions like that just to compare them and say "this is not engineering and this is".
@blallocompany
What do you mean it "goes nowhere"? And I am free to pick the words I come up with, what are you on about?
The fact that there's a set of defined words doesn't mean I need to adhere to them; I can quite literally make up new words on the spot using physical concepts as guides, like "claxchackles", where "clax" implies the act of eating something and "chackles" implies all the fruits that are orange. Now what is your point again?
Plus, it's not a less free decision because you have limited options; it's expected that you can only choose from already defined options (otherwise you would be the one creating the choices that you would then need to pick from anyway). Choosing means selecting X in place of [Y, Z, W, ...], and all of those things need to exist in order for the choice to exist; you always choose from an existing set.
Also, the Greeks didn't have that sophisticated a notion of volition in their time; this is closer to modern philosophy of mind, and even of language, than to that. And the Buddhists can be as wrong as they want: saying I'm a mere observer of the actions I am literally partaking in is no less false than a schizophrenic saying that the red dragon he sees behind me exists.
@
What exactly is this question supposed to mean? I live in Brazil; here most cheap computers have this amount of disk space. Maybe "most" is a bit hyperbolic, and the defaults are around that, up to 500GB sometimes, but those are for newly bought machines. People who live in small cities have old machines, sometimes from more than 15 years ago, which means second-generation processors and the like, with small amounts of RAM (until I got my current PC, ~7 years ago, I had 2GB or 4GB of RAM at most, and after that I used one with 6GB for some years; the disk space was not much more than 128GB, and it was almost always full). It's not that uncommon here.
@PunmasterSTP
I don't think this has anything to do with "knowing what you are doing", though. If you take a random selection of neurons from your brain and put them in a petri dish, they would not be a "memory" or "retrieving something"; they would just be cells in a petri dish, slowly dying and trying to survive the new harsh environment. What makes a brain is the collection of all the connected cells, and what makes a memory, and the process of retrieving it, is the mutual work of those connected cells. It is a fundamentally non-reductive process, and in this sense the common-sense explanation of "I am retrieving something from my memory" is already sufficient (and probably one of the best we can do) for explaining the process. Descending below this level would actually lose information about the process instead of adding it, so it is not only sufficient but adequate.
In other words: knowing about the causal process behind a conscious process does not increase the amount of knowledge about the conscious process, and knowing about the subjective nature of the conscious process does not increase the amount of intrinsic knowledge about the causal process. You cannot go from "this is the concept of a bird" down into your neurons; if you do, you will only see "a bunch of neurons firing together and wiring together", and the most you will know is "those neurons fire together and wire together when the subject thinks about the concept of a bird", which says nothing in and of itself about the actual concept of a bird in the subject's mind. In the same way, you need to actually study the causality behind the thought "this is the concept of a bird" to increase your knowledge of how the brain works; by itself the subjective description cannot be crossed over into the causal one. This is what we usually call "the hard problem of consciousness", and it is the reason we don't need to be conscious of the causal processes involved in making consciousness happen in order to say that "we know what we are doing most of the time".
It is also not exactly something just "popping into my mind"; rather, I'm being directed towards some subject and evoking the thing into my mind. I'm saying those things are not purely random or unknown, but basically volitions of your own.
Also, I think the "one language for a specific purpose" idea is both a good take and also, on some level, bullshit (relating to the title of your video as well).
It is good because specialization tends to produce better tools fit to their specific purposes; it is good for organization and also allows more conciseness in what you are trying to express with code.
And it is also bullshit because learning more languages does not imply a loss; it expands your command of all the languages you've already learned by generalizing the knowledge. Having competition is also extremely good, and factually one of the most common reasons I hear from people is that they "don't want to need to learn so much" (which is laziness³; you also don't need to learn everything, because competition exists and thus you can work with whatever you want most of the time). Also, the more specialized you are, the more context you lose about the world of other things, and the more you need that 'recurrence' and fragmentation inside one workload. You can see this with people using JSON but still inventing more and more protocols around it, or with alternative solutions to protobuf that try to cover logic or some other BS, or even with Lua, where there are dozens of versions trying to generalize it for more cases or for performance-oriented tasks (like LuaJIT, or Luau [the Roblox version of Lua with types and other features]). I'm also not saying this is bad, but specialization can be a good or a bad thing, and it is generally harder to pin down the exact domain of the problems you are trying to solve (the problems you are trying to find in the real world to specialize in) than to make a general-purpose language that can be used in certain contexts more than others. I think we should have even MORE languages, more and more and more of them, because no one language will ever fulfill all the needs of all programmers.
This is one of the reasons why I think AIs can hurt the developer environment much more than help it: they are good at the specific things they have tons of material to train on, and their general tendency is not to innovate but to homogenize everything (the wet dream of the "we already have many languages" person).
@Kenjuudo
No, it is, indeed, a problem. And a problem doesn't imply it has a solution: e.g. the end of the universe is a problem for all living beings, yet it is inevitable; it has no solution. The end.
Next, the problem was never about how you "call things"; it's about things we directly perceive in our everyday lives and how they LEAD to this distinction naturally. I don't need to assume your premises when the simpler case (namely, that those things, neurophysiological brain processes in the third person and the ontology of consciousness in the first person, are in fact obviously distinct) is self-evident.
Saying "everything is a process" also does not solve the problem; it only masks it.
No, I don't think this is completely fair. As far as I know, there is no established consensus on this, either implicit or explicit. We only know it happens, because we exist, and the explanations are varied, but you cannot "agree" with something you cannot explain directly.
And even if this were a true consensus, it is not true that there would be an implication for machines; as far as we know, all complexities are different from each other. For all we know, consciousness could be a unique property of physical complexity and specific chemical reactions in biological tissue, not something a computer program could replicate. Or it could be something non-natural; it's not exactly fine to simply assume it is.
@ThePrimeTimeagen
I can also see why it sucks, but at the same time part of me understands why they exist.
Fundamentally, asynchronous functions are different from synchronous functions. When you write synchronous code you are writing something that will be processed linearly and directly by the processor: you can trust the memory that is on the stack, and you can trust that nothing in the program will happen out of your control in that specific context (assuming we're not using threads, of course); there are a number of specific guarantees. When a function is async, however, we're dealing with something that is essentially constantly moving around, which will need to be paused and resumed: you can't rely on your stack memory (unless it's copied entirely, which incurs other costs, and the different solutions lead to Pin in Rust), you can't count on the consistency of direct execution, you won't be sure which thread will execute your code (if we're dealing with async in a multithreaded environment like C#), and you won't even know when (since that's the purpose of async in the first place). There are a lot of considerations that need to be made when using it (and I also understand that this is part of the tediousness of writing asynchronous code).
Of course, that said, I've suffered a lot with function colors; nothing is more annoying than realizing that you want some "lazy" code in a corner and that to get it you need to mark 300 functions above it (hyperbole). I think that in this sense C# at least manages to partially solve it with the possibility of blocking until you get a result: it wouldn't make a difference in terms of usability if, for example, the entire C# core library were asynchronous, because you can always use a .Result and block until you have the result (not that it is the most performant or safest approach, of course, but sometimes it has its purpose in unifying the two worlds).
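The coloring problem is easy to see in TypeScript, where (unlike C#'s `.Result`) there is no way to block synchronously on a promise at all; the names below are mine, for illustration:

```typescript
// Any async operation returns a Promise instead of a plain value.
async function fetchUser(id: number): Promise<string> {
  return `user-${id}`; // stand-in for a real awaited network call
}

// A synchronous caller can only pass the Promise along; it cannot
// unwrap the value, so the async "color" propagates up the call chain.
function syncCaller(): Promise<string> {
  return fetchUser(1);
}

// To actually use the value, the caller must itself become async.
async function asyncCaller(): Promise<string> {
  const user = await fetchUser(1);
  return user.toUpperCase();
}
```

This is exactly the "mark 300 functions above" effect: every function between the lazy leaf and the entry point has to change its signature.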
@pedrolucasdeoliveiralara8199
Lol, if your point is that Rust doesn't solve the problems average programmers face, then you should not recommend C/C++ either, because for most programmers those will just be harmful: C++ is a huge mess of features and information, and C is about as low-level as it gets. The "average programmer" in this sense should stick to languages like C#, Python, Ruby, Java, Kotlin, Julia and the like.
But the premise is also false, because using Rust is a pleasant experience and people can actually do hobby projects in the language, so your point is just misguided. Also, using a language like Rust can improve the way people think about the architecture of programs, so it is surely worth learning.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
@entelin
If you really want to know, the dwarf with the finger cut off is probably going to be incredibly damaged in the process of creating a work of art, and this can completely influence how he sees the world (and yes, they can commit suicide or they can go crazy, think of the dwarf who has a dream of creating a work of art and cannot do it).
I think at one point you're right, the depth of reading all those texts and little things in DF is incredibly broad and it goes without saying, there are simpler ways to do all of this.
However, I think there is one thing you did not understand: this game does not have an end, and its success factor is precisely its complexity. Tarn didn't want to create something simple, he wanted to create something extremely complex, to simulate as many existing things as possible. If you think you've "mastered the game" then you probably haven't played it enough (the game is still being actively developed; even the economy system is something Tarn intends to come back to in the future). It's the complexity of the interactions between the different little systems that creates this narrative depth.
If DF wasn't so complex, if it was "as simple as RimWorld", I just wouldn't play it, and it wouldn't have such a pertinent fandom. It's the same with Cataclysm: Dark Days Ahead, it's an absurdly complex game that only has the fans it has because of its complexity, otherwise it would be just another survival game in an apocalypse.
DF is a generator of stories, a generator of immersive narratives; it is that even before being a colony sim, and in that it beats any currently existing game.
It's like I said, this game is not for you. I've been playing for a long time and I've never felt "bored" with anything in the game; recently a dwarf of mine committed suicide in a tree and I was incredibly blown away by the situation. It's always these interesting things, which you only have to watch out for, that make everything so interesting and fun, and I wouldn't trade a single feature of the game for a comparative equivalent in RimWorld or elsewhere. Of course, the game needs to be simplified in a lot of things, things that can be done, and that will probably be done; the Steam release is what will make all of this possible over time. Complex and immersive storytelling is the word.
I also don't find RimWorld's combat more interesting; it's certainly very good, but it doesn't compare to DF, really. I think it's more fluid, and in that sense it's better, yes, but the details matter more to me than the fluidity.
4
-
@kodicraft
Darling, I regret to say it, but I think you're utterly ignorant about what you're talking about. Even if North Korea weren't a socialist dictatorship, it wouldn't be a 'capitalist dictatorship' in any possible world: first because 'capitalist dictatorship' isn't a phrase expressing any concrete concept, and second because the very few 'dictatorships' that weren't anti-capitalist in some sense, like Pinochet's, operated their economies fundamentally differently from the dictatorship of North Korea.
You asked me for a socialist characteristic of North Korea, here it is:
A significant portion of North Korea's economy is centralized
Most means of production are state-owned
These are the prime characteristics to classify a state as socialist or not. There are a few other subtleties, but the discussion can revolve around these two to simplify.
As for your claim that I'm 'confusing socialism with dictatorship', they're not entirely different concepts in every sense of the term. A significant portion of modern great dictators held personal ideals that could be considered 'socialist' in some senses of the term, and the revolutionary socialist ideology itself preaches, literally, something called 'the dictatorship of the proletariat,' so trying to dissociate the two things as if they have absolutely nothing to do with each other seems a certain dishonesty or ignorance on your part about your own theory.
4
-
4
-
4
-
I think these are both good and bad thoughts mixed together; a portion of your audience that leans toward modern progressivism seems to have felt awful reading this, but frankly it's not nearly as bad. I would just say to that person to lower their expectations a little and seek to do these things not only to become better, but also because they are things that amuse you. Take a weekend and develop a totally different project unrelated to the company you work for, read a trashy popular fiction book, watch a horror B movie, make prototype after prototype of useless things and throw them away at the end. And also: be lazy; humanity wouldn't have gotten where it is if we didn't look for simpler ways to get the job done. These things are also part of becoming a better person.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
@apsifox5874
Dear, socialism literally caused a bloodbath everywhere it was implemented. And I really want to understand what you're going to call socialism here, which country is socialist, had time to develop, and 'completely eliminated illiteracy, lack of jobs, and homelessness'? Because from what I've seen, you don't even consider Venezuela to be socialist because parts of its economy are private. By that notion, you couldn't even consider Cuba to be a socialist state. We need to do an analysis to see this, and about your question, if everyone is 'wearing gray' and mired in mud and hunger, then yes, it is considered poverty even if there is no one 'beneath me'.
4
-
4
-
4
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
@autistadolinux5336
In fact, I didn't mean that. Traits are quite common in some languages and are not a new concept at all; what's new about Rust is its memory management, something I've never seen done by any language before it (I mean the way it's done). Apart from a very few features like this, the language could be compared with literally any other: monads, immutability, discriminated unions, these are concepts present in dozens of languages out there. Immutability by default and move semantics (by default) are among the things that make me think it breaks a little bit, but not much, with C-based languages (just see, for example, the way most C-based languages simply copy values during assignment, which is not the default behavior in Rust), which is why I don't think it has that much to do with C++.
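A tiny sketch of that assignment difference: in most C-family languages assignment copies, while in Rust, for non-Copy types, assignment moves ownership.

```rust
fn main() {
    // For heap-owning types like String, assignment *moves* ownership.
    let a = String::from("hello");
    let b = a; // `a` is moved into `b`
    // println!("{a}"); // error[E0382]: borrow of moved value: `a`
    println!("{b}");

    // Plain integers implement Copy, so they behave like C:
    let x = 1;
    let y = x;
    println!("{x} {y}"); // both still usable
}
```

The commented-out line is the point: the compiler statically rejects use-after-move, which is where Rust's model diverges from C/C++ defaults.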
3
-
3
-
3
-
3
-
If you think of game design as a profession in itself, one which involves but is not exhausted by being a programmer, then choosing to focus on all these skills will make you a good game developer. Of course, dividing your attention will bring you less development in more specific activities, such as specifically being an artist, musician, level designer, screenwriter or programmer, but that doesn't mean you will be bad at these tasks; you just won't be as good as someone who focuses more on developing those particular skills. It's a dilemma similar to the notion that a general practitioner is less able to efficiently practice specialized areas of medicine: certainly general practitioners are extremely capable of treating all kinds of people, but when you have a specific problem and you have the choice, choosing someone who specializes in your problem is likely to be a wiser decision, which does not disqualify the general practitioner or his skills.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
@CottidaeSEA
It is not a cached result of an interpretation, really. It turns the IL into machine code that will then run and give the result; it's not like the JIT is doing 2 + 2 = 4 and then storing 4 as machine code. It's more like the JIT is converting:
IL ->
.maxstack 3
.locals init (
[0] int32 'a',
[1] int32 'y',
[2] int32 'z'
)
IL_0000: ldc.i4.2
IL_0001: stloc.0
IL_0002: ldc.i4.2
IL_0003: stloc.1
IL_0004: ldloc.0
IL_0005: ldloc.1
IL_0006: add
IL_0007: stloc.2
IL_0008: ret
into ASM ->
mov eax, 2
mov ebx, 2
add eax, ebx
(this is just a simple example)
Of course, the JIT also does some optimizations in the process, so something like 2 + 2 would probably be constant-folded right into 4, but that is not a general rule nor does it guide the entire code generation; it is much more a just-in-time compilation than a just-in-time interpretation + caching.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
2:30 pm
I think that was a good statement, even if it seems superficial if looked at without a specific scrutiny.
There is a pattern to these people's reviews and it usually goes like this:
* I like you, you've done things that please me and/or my friends and/or you've followed some kind of agenda that pleases me -> You're very interested, you do things because you care and that's it
* I don't like you for whatever reason, or you're irrelevant to me -> This is a company, of course it just wants to make a profit, isn't it obvious? There is an evil plan behind this company's actions because companies are evil, BOOOOOO!
When you stop seeing the world as a big platform where people are fighting and there are these abstract entities that people invoke to make their opponents look like monsters, like "profit" or "being a company", and you start seeing that there are human beings there even if they are shitty human beings, things just seem less hysterical.
3
-
3
-
3
-
3
-
@cemreomerayna463
1. I became angry because of your attitude towards my response, not because you offended me directly, nor because of some "stupid technology". I'm just generally tired of AI apologists and people like the other guy in this comment thread who talk bs without any kind of appropriate reflection. But either way, sorry if my tone was not ideal for a discussion; I'm fine and this is all past now.
2. And what I'm saying, what I literally said in my comment, is that this is a fundamentally, computationally intractable problem. Do you understand the implications of this? The implications are that [it is not getting more reliable], or better, that the [reliability gains are marginal]. For one, reliability implies a grounded, conscious commitment to the truthfulness of a sentence: you say someone is reliable when that person has a good amount of proven knowledge, has the right intentions (to seek truth) and has peers confirming that veracity; those conditions are generally reasonable to expect when we define reliability. Now, AI fails 2 of these. It does not have true knowledge in any sense, it literally just spits tokens, that is [literally] how it works; you can make an AI say almost literally anything with the correct prompt, which is far from being possible with humans and is obviously a terrible sign for this prospect. It does not "understand", it spits the most probable token. It can be steered towards more "reliable" responses by reinforcement learning and other techniques (like dataset filtering or "grounding" with things like RAG and similar), but it is still fundamentally just spitting tokens in a specific order; there's no knowledge there, and it fails to satisfy condition (1). For (2), AI obviously does not have a conscience, nor does it have any kind of known morality; it can only imitate and spit. It is extremely easy to see why, by implication, it also cannot "commit to the truth" nor "tell the truth" by any means imaginable: these systems are intrinsically not reliable, and that's the point.
For coding the implications are exactly the same. A programming language is not a "regular grammar", I don't know where you got that impression; most if not all mainstream programming languages are literally context-free grammars with specific context-sensitive aspects. Even while structured (because they need to be parsed efficiently), they are obviously far from being as complex as natural language, but nowhere near as simple as a regular grammar. It is also the case that coding is extremely complex in and of itself, and even the best, most advanced "reasoning" models make arguably extremely silly mistakes that you would expect from a complete amateur (like literally inventing fictitious packages out of thin air, writing dysfunctional code that does dangerous things like deleting what it is not supposed to delete, and having a basic to terrible understanding of coding patterns and expected solutions). I've used every model from GPT-2 (obviously unable to code almost anything but extremely short one-liners that were terribly wrong almost all the time) to GPT-3 (terrible at coding, but starting to improve), 3.5 (way better, still terrible), 4 (mid at best, still very flawed), 4o (almost the same as 4, but a bit more precise), o1 ("reasons", but still commits the same basic mistakes I saw in 4o over time) and o3-mini-x (which is not that much better than o1). Those models are not more "reliable"; they are better at making less obvious mistakes (which is arguably more dangerous, not less, as now you need to understand the semantics of the thing to catch them). They make fewer crude mistakes, but they still make copious amounts of silly, problematic errors. Their "reliability" is getting only marginally better with each new innovation, which is exactly my point.
3. This is not only false, but also a dangerous way of thinking in and of itself. See (2) for reasons why the unreliability of humans is inherently less problematic. Even more: humans take responsibility for their actions, they are moral agents in the world, while AI agents are, again, just spitting words. If a human makes a terrible, fatal mistake, he can be fired or even sent to jail; he will have nightmares about his mistakes. A bot making mistakes is just a sociopath: it cannot be held accountable, it cannot feel anything, and its unpredictability is [absolutely dangerous], while humans have developed ways to deal with their uncertainty that demonstrably work (we literally delivered man to the moon with software so small it wouldn't compare with a hello world compiled in many of the most modern languages). Your response is insufficient, and problematic.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
@marcs9451
I think understanding contextually what it means to "dictate" something is important here, but anyway. He's doing this in what <he> thinks is the right way to "shoot yourself in the foot", and he's trying to claim that this is the best way to learn something. I don't think that's the case, not even close. I've known people who learned a lot more through a gradual climb in difficulty than anything like starting directly at the edge of the precipice could bring, and I've also known people who don't work like that. Trying to find a "perfect formula" for obtaining deep knowledge is a task that is doomed to failure from its inception.
In the same way that A can obtain deep knowledge through starting with the most complicated things and that would cause him more failures, B can obtain deep knowledge through an association for usefulness or curiosity, "shooting himself in the foot" can be a disincentive of knowledge for B and an excellent stimulus for A, I myself have been between A and B in my life and so I feel that attempts to take it that way can be dangerous to one's quest for knowledge.
But anyway, that's just a rant of mine, you can ignore it. Sometimes I feel extremely tired of people all the time trying to "point out" the best way to do this or that, as if it were possible to know what will work better or worse for a person, sometimes it's better to just let it be.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
@user-zs6es5yr6b
Yeah, I did it, you know why? Because assertion without justification goes in, assertion without justification goes out: what is claimed without argument can also be dismissed without argument.
And as for your other claims, they have no economic grounding here. If "paying the minimum amount of money possible" means the workers can earn tens of thousands of dollars a month, then sure, it really is the market; and likewise the workers are willing to earn as much money as they can, just as consumers try to buy things as cheaply as possible and businesses try to sell goods for as much as they can. This is just basic economic theory and is not a source of justification for his arguments. Your theory cannot explain why programmers in my country, for example, earn more than 90% or even 99% of the population (I'm from Brazil, as apparently you already know) while still being workers; they earn so much because of simple supply and demand, and thus your conclusions can't be true.
Also, precisely because I'm Brazilian I can tell you for sure that it was not capitalism that made us poor here: we had a socialist party in power for almost 8 years in a row (more if you count previous governments), and the "fascism" was nothing but a centrist who was a big democrat (supposing you are saying this about the last elected parasite, Bolsonaro). Our economy has been heavily regulated from the root, we have one of the most complex tax codes in the world, and things have gotten worse and worse over time; if this is capitalism then you must live in a dream socialist land on your side.
3
-
3
-
3
-
3
-
3
-
I don't think Rust is old enough to have many games developed in it; even C++ took a while before it had its first successful commercial titles, and at the time games were much scarcer, so there was more room for innovation (C++ was conceived in 1979, and the first commercial games using the language only started to be released from 1990 onwards). Not to mention that almost every time I see someone developing a game in Rust, there are a number of people saying "you are developing in the wrong language, it should be C++" or "why did you choose such a strange language that is not used for games instead of C++?". People are simply creating a self-fulfilling prophecy around Rust, in which everything done in the language receives a considerable amount of criticism, and when it is not done people say "I will only use it if there are products made in it"; it's funny if you stop to think about it. But having said that, there are games being made with Rust, and I believe that in the next 5 or 10 years, if nothing colossal happens in the industry, we should see good games made in Rust coming out.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@MrjinZin0902
Yes, C++ has duct tape over its problems; the question should be whether this has historically solved the problem of security in software (spoiler: no). If I take 10 random C++ repositories on GitHub, from the tests I have done, the probability is that a significant portion of them still use raw pointers and manual memory management. This is what it means to try to retrofit something already established: you are adding a new way of thinking about things on top. Think about it from the language's perspective: the amount of educational material built on top of it that doesn't use these features, the amount of tutorials and examples, and the amount of additional learning difficulty that a concept like this adds to the language. Just think about it: if this feature were really effective, we wouldn't be considering using something like Rust in 2022.
Rust on the other hand doesn't depend on additional learning; all its material is based on the notion of safety, and all the things you learn take this fundamental fact into account. If you make a silly mistake the compiler will yell at you and tell you why you're wrong. When I learned to program in C++ many, many years ago, the unique_ptr feature already existed, yet I never met a single person who recommended its use. A few years later I went back to finish learning it for some uses, and still I didn't find anyone recommending these techniques in the countless tutorials I delved into. Rust is also about a lot more than memory safety; it's about concurrency safety, something that comes by default. It's considerably harder to write code that incurs concurrency issues in Rust than in C++, and that's another one of its selling points.
There's a lot more to consider than the features of a language: you also need to think about the human structure behind it, and about the code structure it will compel you to adopt (Rust will compel you to make code immutable by default, to manage references in a way that is safe to share, and to not leak memory from the first moment you use the language; this is something that cannot be overlooked).
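As a small sketch of the "concurrency safety by default" point: sharing mutable state across threads in Rust only compiles once it is wrapped in the safe primitives (Arc/Mutex here); handing a bare `&mut i32` to several threads is rejected at compile time.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The compiler forces the safe wrapper: Arc for shared ownership,
    // Mutex for synchronized mutation. Without them, this won't compile.
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // Each thread must go through the lock to mutate.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    println!("{}", *counter.lock().unwrap()); // 4
}
```

The data race that would be an undefined-behavior bug in C++ simply cannot be expressed here without `unsafe`.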
2
-
2
-
@MrjinZin0902
There's a cost to everything, as you should know. C++ doesn't solve the concurrency problems; that would be very complicated, to say the least, with a language like that (of course, if you believe it solves them, you're free to demonstrate it).
I believe the simplest way to settle something is to give reasons to support it. I believe Rust is a more interesting tool than C++ because Rust has built-in safety by default, which makes insecurity explicit and security implicit (unlike C++). And Rust is effectively touted as a safer replacement for C++; even companies like Google and Microsoft have been talking about it for some time now. I'm not saying they own the truth or anything, just that when big players worry this much about security risks caused by bad memory management, it's a sign there's something fundamentally wrong with the way software in general has been developed. And if even Torvalds, a difficult and rather inflexible person, is considering letting Rust into his precious kernel, I think this is no coincidence.
See? We have many, many signs from reality that C++ is effectively being used the wrong way or just doesn't solve the problem. With either of those two answers, the simplest solution is to ditch C++, and Rust has been considered precisely because it eliminates most of the things that cause security problems by default.
This is a very simple conclusion.
2
-
2
-
2
-
@rt1517
First off, the Linux kernel is big, and nobody is talking about rewriting <the whole thing> in Rust; it is an adaptive and incremental process (one already being carried out on other operating systems like Android, where 21+% of new code is Rust). Secondly, compilation in Rust is known to be slow the first time but generally fast afterwards, and it can be improved by splitting the project into multiple modules (a technique people actually use, and it just works; most projects don't take more than 1s to compile after the first build when properly split into modules), so this is not an immense and unsolvable problem (and of course, the Rust team is also working to improve compile times). That's sorted out, right?
Fine, then:
RAII is used in the official Rust documentation as an analogue of the C++ behavior, and I acknowledge that Rust does use a kind of RAII in its workings, but it has some differences from the C++ way of doing it. While it is okay to refer to the Rust OBRM model as "RAII", these differences appear significant to me in the way they integrate with the compiler and in the predictability of how things will go (related to the borrowing and ownership systems). But that's fine; and still, it is obvious from the context, and you didn't provide any reasoning on why it is not, when it is clearly defined and very intuitive from the beginning: if you go out of scope, whatever that scope is, the resource is dropped. The ownership/borrowing system makes it obvious that the drop <will> be called wherever the resource is owned, and the syntax is pretty clear about who owns what, so in this sense everything can be easily deduced. You can counterargue with that, but this does not appear to be an extremely controversial thing.
Next, the struct "may not implement the Drop trait", but this is not a general problem of the language, because fundamentally <you don't need to implement it manually>: dropping is a basic concept in the language and the drop glue is generated implicitly in these cases (which implies that the memory will be dropped, as intended; explicit Drop implementations are far more specific and goal-directed, so this is a non-problem).
Of course I understand your point that "a function is called and we cannot see it", but this is more of an opinionated take than anything, and as you realized, Linus doesn't care about this because the benefits of the language outweigh the drawbacks (if you can even call this a drawback in itself; as I said, the Drops are mostly obvious from the scope). The Linux kernel has a lot of macros and nasty stuff as well, including some 'implicit' things if I remember correctly from the commit logs and some articles, so I don't think most people who develop kernels care about this enough to make it a real problem.
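A minimal illustration of that "invisible call" being obvious from scope; the explicit Drop impl below exists only to make the implicit drop observable, since ordinary types get the drop glue generated automatically.

```rust
struct Resource(&'static str);

// Explicit Drop impl purely so we can see when the implicit call fires;
// most types never write one.
impl Drop for Resource {
    fn drop(&mut self) {
        println!("dropped {}", self.0);
    }
}

fn main() {
    let _a = Resource("a");
    {
        let _b = Resource("b");
        // `_b` leaves scope here: drop runs with no visible call site
    }
    println!("inner scope over");
    // `_a` is dropped at the end of main
}
```

Running this prints "dropped b", then "inner scope over", then "dropped a": the drop points follow directly from the braces, which is the predictability argument above.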
For the last part, I don't see who you are to judge what counts as a "good C code review"; maybe you are a 30+ year old programmer like Torvalds himself and see things most other people don't, but I'm not seeing him complaining about this like you are. As for the parts you are referring to, they have a rational motive behind them:
```
// Note that errors are ignored when closing a file descriptor. The
// reason for this is that if an error occurs we don't actually know if
// the file descriptor was closed or not, and if we retried (for
// something like EINTR), we might close another valid file descriptor
// opened after we closed ours.
```
As they explain, this is a thing that cannot be avoided in this context. It does not mean that the kernel code will use this implementation rather than a custom one (and they are making a bunch of those as well; if I remember correctly they are building a whole utility box for this). For most user-level code, this function not checking whether the file descriptor was closed does not effectively matter; it is a safety measure over what cannot be known in this specific context, an understandable and documented choice, and, if you can solve it, you can improve the Rust codebase by sending a pull request instead of just complaining without solutions.
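Sketching the "if you care, check explicitly" approach with the std API: errors on the implicit close in Drop are ignored, as the quoted comment explains, but write-back failures can be surfaced before the file goes out of scope via `sync_all`. The file name here is a made-up temp-file path for the example.

```rust
use std::fs::File;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // Hypothetical demo path in the system temp dir.
    let path = std::env::temp_dir().join("close_demo.txt");

    let mut f = File::create(&path)?;
    f.write_all(b"data")?;

    // The implicit close in Drop swallows errors; if durability matters,
    // flush to disk explicitly and handle the error here instead.
    f.sync_all()?;
    Ok(())
}
```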
2
-
@rt1517
1. Nope, the comment is explicitly saying that we can't know whether the file descriptor was closed or not, and that if we retried or tried to handle the error (they cite EINTR as an example), we might close another valid file descriptor opened after we closed ours. It is not saying this is only the case for EINTR. As I said, if you care you should probably close it manually and verify, instead of relying on implicit behavior; this is not a problem of the language itself but a responsibility of the developer (as it should be with these more explicit things, as we have with sync_all), so it's a non-problem in itself.
["And the issue is known and discussed by the Rust community, for years. Yet no good solution has been found."]
As for this, you should probably suggest a solution then; you are saying this is a mistake and that Linux already solved the problem, so what about it?
2. ["There is a kernel in C#.
Does it means that C# is good enough to be introduced to the Linux source code too?
I wrote a Windows driver in Delphi in 2010. It worked. Maybe Delphi should be introduced to the Windows kernel?"]
You are being fallacious here, and that's not nice, bro. I'm saying modern companies are investing in Rust for kernel/driver/OS development; I'm not saying some hobbyist is doing these things or using them as a means of self-validation of the language. Instead, I'm saying that the language <is appropriate> according to kernel/driver/OS developers. Your question would only be pertinent if C# were actually an appropriate language for kernel/driver/OS development, which it is not. If C# did not have problems with memory management and performance, and/or had been made for the purpose of systems development (like Rust was), then surely C#, or maybe even 'Delphi', would be suitable for the job. You appear to be dismissing C# because you think it is not suitable, which makes the point: C# is unsuitable because of N factors, while Rust is suitable because of N factors, and the fact that there are successful implementations of it in important systems is one <more> indicator of its appropriateness beyond the theoretical stuff (and obviously, I'm not saying Rust is appropriate based solely on the fact that it was used).
3. ["Linus is well known to be an arrogant thick headed guy.
He wrote Git after not finding a good source-control management system."]
Yes, and now Git is one of the most used source control management systems in the whole world of software development; maybe he's right after all, no? Again, I don't think you have the standing to question his decisions or say "he became weak with age" if you don't have something as big as what he built and contributed to show here; this is bs.
2
-
2
-
@isodoubIet
> Of course it's a bad thing. It inhibits code reuse
It really depends on what you are calling "code reuse"; I'd need to disagree with you on this one unless you show some concrete real-world examples of it.
> loosens the system modeling as you're now encouraged to report and handle errors even if there's no possibility of such
This is a sign of bad API design, not a problem with having errors as values. If you are returning a "maybe error" from a function then it <may be> an error; that is a clear design decision.
> increases coupling between unrelated parts of the code
I mean, not really; you can always flatten errors or discard them easily in an errors-as-values (EAV) model.
> and can force refactorings of arbitrarily large amounts of code if even one call site is modified.
Again, this is true for any kind of function coloring, including <type systems> and <checked exceptions> (like Java has). Well-designed code should be resilient to this kind of problem most of the time.
> You can say "this is a tradeoff I'm willing to make". That is fine. You cannot say this isn't a bad thing.
I absolutely can say it is not a bad thing. It is not a bad thing. See? I don't think function coloring is necessarily bad, so I would not agree with you upfront that this is a bad thing. I think being explicit about what code does and the side effects it can trigger is a good thing. An annoying thing sometimes, I can concede, but I cannot call it a "bad thing" in itself, only the parts that are actually annoying (and the same goes for when you don't have this kind of coloring and then it blows up in your face because of it; that is the "bad thing", not the lack of coloring itself).
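For concreteness, here is the errors-as-values shape being discussed, in Rust: propagate with `?` where the caller cares, flatten with `.ok()`/`unwrap_or` where it doesn't. A small sketch; the function names are mine.

```rust
use std::num::ParseIntError;

// Errors as values: the "maybe error" is part of the signature,
// so the coloring is explicit at every call site.
fn parse_both(a: &str, b: &str) -> Result<i32, ParseIntError> {
    // `?` propagates the error upward to a caller that cares.
    Ok(a.parse::<i32>()? + b.parse::<i32>()?)
}

fn main() {
    // Handle the error explicitly...
    match parse_both("2", "3") {
        Ok(n) => println!("sum = {n}"),
        Err(e) => println!("bad input: {e}"),
    }

    // ...or deliberately flatten/discard it when the caller doesn't care,
    // which is the "decoupling" escape hatch mentioned above.
    let n = parse_both("2", "oops").ok().unwrap_or(0);
    println!("defaulted = {n}");
}
```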
2
-
2
-
2
-
2
-
@amputatedhairstrands
You say, the millions of people dying of hunger mainly in Africa, in countries that are literally socialist dictatorships? My apologies, I hadn't understood. After all, we know that life expectancy in capitalist countries has practically doubled compared to earlier periods, and that hunger in human history has never been so low. Also, the relative number of well-fed people in countries with greater economic freedom tends to be much higher than in countries where this doesn't exist. But we need to achieve a perfect world, or else we aren't qualified to talk about anything, right?
If we take some of the countries most associated with hunger, like Yemen and Sierra Leone, a significant portion of them either have a history of flirting with socialism (or dictatorships and guerrilla groups with a socialist bias), or they have an absolutely laughable amount of market and peace due to constant wars. But it's capitalism's fault, right? Makes total sense.
I also find this game of 'no access to health services' extremely amusing when the very notion of health services has developed to this extent precisely because of capitalism and the advances that industry has provided. Or do you think peasants in periods prior to the Industrial Revolution lived 'happy in their disease-free communes, with access to free quality medicine'? Or that primitive tribes that even kill their babies, literally because they can't feed them, are also fruits of capitalism? It seems to me a pretty heavy measure of ingratitude toward the benefits you only enjoy thanks to the market and initiatives that only existed under capitalism.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@autistadolinux5336
I wouldn't say it's C++-like, actually. I've had the "chance" to work with C++ for quite some time, and while I'd say Rust isn't close to C, it's not exactly close to C++ either. The truth is that Rust draws from many different sources, including a very strong influence from functional and multi-paradigm languages (even Ruby gets into this), and while it has serious similarities to C++, the same could be said of most modern languages out there. In my view, Rust is much more a functional language than an object-oriented one, for example; it simply has different ways of dealing with things (e.g. the notion of immutability by default), and in some ways I would say it is a language unique unto itself.
That said, I only use Rust when necessary. I mean, you don't need a low-level language 90% of the time; C# or even JavaScript are quick and easy solutions. If I needed to make a small low-level project fast I would probably use C for the simplicity and tactile feel of the language, and if I needed something more robust and secure I would use Rust. Languages are just tools, and we should pick the ones that serve us best.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@anon_y_mousse
First, I was responding to your comment about novice C programmers being wiser than Rust ones, and responding in general as well.
Next, I didn't make any pointless arguments. The fact that you have little knowledge about economics does not make you immune to the consequences of that lack of knowledge. If a company the size of Microsoft (a literal corporation), with the amount of information it has (which is quite broad), is saying that most of its security bugs came from memory errors (and it is saying that), then we very likely have good reason to think prima facie that these languages create a favorable environment for this kind of bug. A lot of Linux kernel bugs are due to things like that too, many of which could have been avoided with a safer tool like Rust (or not; after all, we have to test things before we know they will work).
The argument that the creator of Linux would consider adopting the language in his dear kernel just because of "pressure" is completely irrational; that, yes, is just a cheap complaint, and you're disregarding all the information Linus himself must have considered when agreeing to include the language in development. He's not stupid, and he doesn't act under simple pressure; that didn't even happen with C++, which has one of the most toxic communities I've ever seen in my life, and it wouldn't happen with Rust.
The point is, we use magnetic screwdrivers to avoid the mistake of dropping a screw, yet experienced people rarely drop a screw using an ordinary screwdriver. We use cars with smart sensors to avoid crashing, yet good drivers are unlikely to crash using a car without these sensors. We consider taking precautionary measures on our own future actions with insurance and the like, even though we are very confident that we will not go wrong, and we take precautions even though we are experts in our fields. Science has checks because even the greatest investigators can make obvious and crass mistakes. We use safety equipment even though good workers will probably never have an accident. Even the best doctors use better equipment even if the old equipment did the trick, just to lessen the possibility of an error.
There is absolutely no reason to believe that a programmer, even the greatest and most experienced programmer in the entire world (who, by the way, won't live forever), won't make mistakes. Believing that is naivety that borders on complete insanity, and using a tool that reduces the propensity for silly mistakes (and even serious mistakes) to occur is just a natural step in technological evolution.
And make no mistake, maybe Rust is not that tool that will prevent serious security errors from happening, maybe the language doesn't work for that and everyone who says it works is completely wrong, who knows? The idea is that if we never try something new, we will forever make the same mistakes. If Rust doesn't work, let's build something better and test it again, if that doesn't work, again, and again, we'll keep going, until we've found a way to effectively make working people's lives safer, even if for a small margin, that's progress.
2
-
2
-
2
-
2
-
@mskiptr
[1/4]
My problem is indeed with copyright in general, but copyleft is a special instance of it so this is part of the reason.
Also, without clarification your "artificial concept" point does not make much sense. I understand the sense in which, in my understanding, intellectual property is artificial (it needs to be maintained by the state using coercion in order to exist), but this doesn't apply to "all property in a way" (unless you are using a concept as broad as "artificial is whatever applies only to humans" or something along those lines). But I will not be pedantic about this: intellectual property is so artificial that for it to be possible you even need to violate normal private property.
[2/4]
I also don't like the term "ancap utopia"; what I defend is possible right here and now, and I live it every day to a certain extent. Also, I find the notion that "since we don't live in a fair world, we should play the game and do unfair things" extremely problematic.
[3/4] & [4/4]
I mostly agree with the overall feeling of this, but not with some presuppositions and conclusions, so I'll quote them and respond accordingly. To simplify the discussion and make it more productive, I'm treating patents, copyrights and everything else as occupying the same place as IP in general.
> Because someone else learning about your idea let's them do as much as you can, some people will try hard to never let that happen which is usually a ton of unproductive effort.
I don't think this is strictly true, no. I think the original promoter of an idea has a novelty effect, a pioneering advantage, over the rest of the competition. His name is part of the idea and the product he offers, and this has some impact on how things will play out. Also, this presupposes that the idea is shallow and already written in stone; many ideas can improve in different ways over time, and the person who came up with the idea has an advantage over people who only copied it: his mind already has the shape of the idea, and his ability to give rise to interesting things from it is a factor.
> Yes, they are a bit unfair towards people that might have invented the same thing independently.
In this case it would not be just "a bit unfair"; it would be absolutely, extremely and terribly unfair. I don't understand why you are smoothing over the real problem in this scenario: blocking people from using the same idea you had, as if you were the owner of the idea floating in their minds (even when they came up with it independently).
> But if they are only granted for a short time and only for novel ideas, these cases should be very rare.
I don't think this is measurable. And also, what is a "short time", and what is the scope of "only for novel ideas"? Who will judge and decide over these two: you, some corporation, the state?
> On the other hand, they provide an awesome alternative to hiding your invention and they encourage sharing your knowledge. Overall a massive improvement for just a small potential injustice. Imo a worthy sacrifice.
I don't think people would just "hide their inventions" and "not share knowledge" if IP were not a thing, just as they had ideas and shared them long before IP existed at all. Property only makes sense because people can steal things from you, alienate your control and violate your self; it doesn't exist to "incentivize production" or "to make it worth your while to have a business", and its existence is not conditioned on whatever profits it may or may not give you. The same should hold for IP. I don't see why someone should be punished or imprisoned just because someone else will get mad if "his idea" is not being enforced by law. I have no problem with IP as something people joke around with, like "oh look, I made this, this is mine so you can't have it"; I have a problem when it means "you are now obligated to pay me for your 'unfair' use of my idea, if you don't pay me you will be imprisoned, and if you resist enough you will be killed", and this is precisely what people are saying in reality when they defend this "incentive not to hide ideas".
Moreover, open source is already a thing today, and it shows people are willing to contribute, have ideas and share them even when they are intentionally putting their code into a domain where anyone else can freely copy, modify and distribute it (like releasing the code under MIT, BSD, Apache-2, and some other funny ones), so you don't need to believe in the fable that IP is "the thing" that makes people have ideas and share them.
You should also ask yourself if this is "worth it" when it also means that healthcare, for example, will be much more costly than it needs to be, and that the production of medicine is monopolized by a few industries holding IP rights, putting the lives of potentially millions of people, who in a free market would have a much better quality of life, in the hands of those rights holders.
> With inventions, they likely wouldn't have access to it otherwise, but can we confidently say that without copyright people wouldn't usually share their works? I'm not really sure
I really don't know, but I don't think that justifies your position either, in this same sense. I know, for example, that people shared knowledge and had novel ideas before IP laws were a thing, and I know people would still demand and need things if IP were not a reality today, so my position is that this is far more probable than not. Either way, I don't use this to justify the entirety of what I believe; it is just a bonus. My reason is that using coercion on people to enforce those laws is a problematic and unjustified thing.
> Also, why the heck software falls under copyright like books or movies and not under patents like other utilitarian works? It doesn't make much sense imo.
I think because software is also creative work to some extent, and there has to be some originality when writing it. But these are things you should be asking yourself; I don't think either should exist.
Also, what is your defense of the GPL here, again? If you believe that patents, copyright and IP in general should exist without much reservation, why do you have a problem with normal proprietary software? Or do you just think the GPL is really just another form of IP and (as you defend them all) that is why you are defending it?
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Let me correct some problems here.
["The root of these words is basically "discrimination". You can only be discriminated (or, in simpler terms, "bullied") when you're a minority. If you're part of a majority, you'll have more people around that will jump into the defense. Think about it like a schoolyard, where some guys are being bullied. If they'd try to bully back, they'd be beaten up by a majority of people, since they're fewer. If a bully is attacking one of these people in the minority on the other hand, they don't have to fear too much about the consequences, since they're being backed up by a majority of people on the schoolyard."]
This is simply false. Even in your own analogy it does not make sense, because at the end of the day the kid who is being bullied (and no, being bullied is not the same as being "discriminated against") is a minority on his own, and even if other people as a whole can sympathize with that child, it does not make that child less of a minority in his own scope. Being "backed" by other people, or the bully being punished, also does not mean that the bullying did not happen: if the whole school turns against a bully in a given moment, this does not make the act of bullying disappear, and if the whole of society comes to defend a minority that was offended (and surprise, this happens many times; are those situations not discrimination, then?), this does not remove the discrimination that happened. When a woke person discriminates against another person, even if that person has defenders (which in many cases is simply not true; people being cancelled and harassed as a minority in places like Twitter is commonplace), it does not remove the status of discrimination. You are only applying this conveniently to your own case, because at the end of the day people who are being discriminated against are a minority on their own, in their own context, no matter what abstract collective entity you think is "backing them up".
["You can apply this concept of discrimination to real life situations, where for example a person with a non-caucasion name is being denied from a job (this is probably a bigger issue in Europe), or a woman is being denied a job. The reason this works and no one cares too much about this is because they're a minority, they can be bullied, since fewer people can step in and defend them, and more people will step in and defend whoever caused this situation (fortunately things are changing in these obvious examples, but they're still far from solved)."]
The thing is, it does not "work" because "nobody cares". It works because nobody notices, and nobody notices because these people usually don't complain about it. When people complain, when people see what is happening, they usually respond to this kind of situation. And the other reason most people don't complain about these things is that, even among the minorities suffering from it, it is not a majority experience: it is usually the minority of the minority who suffers discrimination in a society, simply because it is also usually a minority of the population that is actually discriminatory in many given situations (and of course this also varies case by case and depends on the surrounding culture). People like you see a woman being denied a job and think it was "because she was a woman"; reasonable people think of the objective, measurable costs of hiring, the bureaucracy, needs she would have that a man would not, factors that can affect the judgment more than "because she's a woman and I hate women", and many times it would be for other objective reasons, like lack of competency, the fact that the job is not even suitable for a woman (like heavy physical work), or even the personality of that person. And after all that, what intrigues me most is that women are not a minority in society. In fact, the European Commission's demographic data for 2022 says there are "Almost 5% more women than men in the EU", so your own argument does not make sense here.
["Having established that, we can simplify this rule. If you're talking shit about a majority of people, you're not discriminating. You're the bullied kid who's throwing rocks at the bullies, knowing that you're getting beaten up over it, but you're able to do this (now) because you're prepared, the other minority people have also prepared to fight alongside you, giving you the edge to actually win this fight or to at least de-escalate it. That's why saying "I hate straight people" isn't discriminating, as straight people are the majority, and non-straight people the minority. It might hurt (as a rock will do), but it won't hurt as much, as straight people don't have to be afraid of being beaten up. If a bully throws a rock at a minority (someone saying "I hate gay people"), this will hurt, since the gay person can't really fight back."]
Well, you should first respond to my points before trying to argue this here. You should also argue against the very definition of "discrimination" (which is an almost universal word and has not only a negative connotation but also a positive one, e.g. discriminating against people who are bad, which is a good thing). This whole "oppressed violence" framing does not work on anyone with a minimally functioning brain, who can see that violence is also contextual, and that if you kill a straight man because he was a straight man, it does indeed make you a discriminator in exactly the same sense as killing a gay man because he was gay.
The other obvious flaw of this is that, even if we were taking it seriously, it is still a logical flaw by itself: a violent act is not less of a violent act because "other people can de-escalate it" or because "you have the edge in winning this fight", just as being a thief (and thieves are an absolute minority of the population) does not make you a victim because people will beat you if you try to rob someone in broad daylight in the center of a big city. This literally does not make any sense <at all>.
["Yeah, it does sound complicated, and I hate that progressive thinking isn't too easy to grasp at first. To understand these things involves a long process of inner thoughts and reflection about situations in live, and often times can only happen if you've been the target of discrimination in some way or form."]
It is not easy because it is wrong, it is as simple as that, and trying to bring standpoint theory in here does not make things better. You do not, literally, need to be robbed to think robbery is wrong and bad; you do not need to be killed to think killing is bad and being killed is bad; you also do not need to be a victim of discrimination to know it is bad. You can literally see a video of it happening and your own heart will tell you "this is wrong"; this is literally how morality works. When you start bringing in things like "you need to suffer to know this is wrong" is when you understand this is dogshit without any meaning. And even worse, when a victim of discrimination themselves says "it can happen to anyone" (and I can find you many examples of this), progressive people start saying they are "alienated" and start putting their own ideas on top of these people; you only care about this "being a target of discrimination to understand" thing when it aligns with your own agenda.
So no, you are wrong, but at least you tried to argue for it and I think this is respectable, so cheers.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@sszone-yt6vb
Yes, precisely, this is how art is supposed to be appreciated. You hold your opinion until you understand the context of the thing; otherwise you are just making technical judgments about how it was made and stating your personal feelings about the blind quality of the thing. Also, this is what people already did before AI art: they understood the context the artists gave, because, you know, artists were usually the ones showing their paintings and drawings online, and if you can absorb something, it's even better coming from the artist themselves than from a third party.
And the point about LLMs is bs. They don't have "minds", but even if they did, this would be a completely different discussion; here we are talking about something AI art cannot reproduce, which is the individuality, the story and the context behind human art. Whether AI will or won't have its own versions of those things is a completely different question for a different discussion.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I think it is possible, yes, but with some concessions.
For example, while you cannot ensure the safety of libraries in other languages, you can "kind of" ensure, at some level, the safety of Rust library code by requiring external functions to move their checks to runtime: instead of relying only on the compiler to do the safety checks, rely on runtime types that should be stable between Rust versions (or at least embed a version property so each caller can check whether the caller's and callee's versions are compatible). So:
* The library function requires a moved struct? Rust will treat it as a normal function and leave the choice of how to dispose of the value to the called function
* The lib function requires an immutable reference to the struct? Fine, just pass the reference and trust the safety of the caller
* The lib function requires a mutable reference to the struct? Same as before; maybe make a custom mutable-reference struct to check whether it was really released externally (or just don't; optimistic reliance can be a thing if your project and the library agree on the stable version)
* How do we agree on the version? We pass a special struct containing information about the compiler we both used; if it is compatible (i.e. no breaking changes occurred between the versions) we let the call happen, and if not it panics at runtime (or fails in any other way).
It appears to work at least in theory to solve the problem <at some level>.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@Kenjuudo
I'm treating it as if it implies ontological separation because it [does imply] that. This is not only obvious from very immediate observation, it is already well defended in the philosophy-of-mind literature (see, for example, Searle's view on the topic in Minds, Brains and Science).
It's an "assumption" in the way that saying "the sky is blue" is an assumption: we don't just assume it, we know it because we see it; it's part of the very nature of perception itself. You can say those things are "different vantage points on the same recursive structure", but that says literally nothing about the problem at hand unless you remove this fundamental part of human experience from your equation.
And it is, indeed, a dodge, something people have been trying to pull for a long time now (see, for example, Skinner's weak attempt to frame subjective consciousness as "the world behind the skin"; his "behavior" is your "process" here).
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
1
-
1
-
@Demetriofim
On Rothbard wanting to dissociate Friedman from libertarianism: that is not really a major point, but in any case I didn't cite only him; I also cited Caplan and Huemer (and there are a number of others). EA, of course, has a striking compatibility with libertarianism (which I never claimed was the same thing as EA, although you suggestively wrote "Libertarianism and EA are NOT the same thing", for some unknown reason). I don't know whether it is intentional or not, but you are craftily attributing to me, as criticisms, points I never committed to...
1
-
1
-
@thewitheredstriker
This is because nowadays the amount of low-level programming is vastly smaller than the amount of high-level programming, which wasn't exactly the case back then (and let's face it, C's dominance didn't last that long: most companies in the 2000s had a good part of their legacy systems written in Java and languages other than C, and even banking systems used COBOL rather than C; it's not like C was THE programming language), not to mention that in that era most of the systems we use today were still being written.
Nowadays, for a language with Rust's ambitions to prevail and become mainstream, it first needs to be effectively adopted, which won't happen if we stop using it just because it's not established (that would be a vicious circle). Rewriting systems from scratch is simply impossible: the cost is gigantic and would be impractical for the overwhelming majority of corporations; hiring new programmers too, and even training existing ones would be costly, which is precisely why adoption is not so fast (the fact that Rust has a relatively high learning curve doesn't help). In terms of stability, on the other hand, the language appears to have few bugs, and it doesn't seem to be stagnant; it has an active community and people are willing to use it. Think about what Linux itself would have been like if people had never given it a chance in the first place.
At the end of the day it all comes down to cost, and it seems big companies are starting to see Rust as more cost-effective than not using it (we're seeing movement in big companies like Amazon, and recently Microsoft itself, and even Dropbox has a significant portion of its systems written in the language). Every not-yet-established technology is a gamble until it isn't, which is why the world moves slowly (e.g. we still don't have IPv6 in, I would say, more than 50% of places in my country).
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@NihongoWakannai
> it's a problem because we are real people having to live real lives not statistics on a spreadsheet.
Curious, this is <exactly> why I don't believe in government intervention. Any intervention will absolutely assume that the whole of society behaves like a spreadsheet or mathematical formula that can easily be predicted and corrected, and not like millions of lives with their own wills and impacts on the concrete economy.
> Just look at the inflation that happened post-covid, it was a market distortion that was inevitably going to flatten out over time and yet many peoples lives were significantly worsened in the time it takes for the market to stabilize again.
Again, this is more of a problem for you: you just said we are real people living real lives, and now you are talking about the real economy as a bunch of "curves" that will "flatten out".
> Arguing "don't worry, market forces will take effect" is like saying "don't worry about the tsunami, sea levels will return to normal after it hits!"
I also never said that. You need to worry; you don't need to act. In fact, it is not even that you don't need to act, it is more that you <should not> act, because you are completely unable to predict the real-world consequences of any state action on the markets over time. You literally don't know whether the measures you take to "correct the market" (again, treating it as a bunch of statistics and not as a thing composed of real people, as you supposedly defend) will backfire in the long term and make people's lives much worse than the short-term benefit.
1
-
1
-
1
-
1
-
1
-
1
-
@danielhalachev4714
Structs, non-erased generics, advanced pattern matching on types/lists, async/await, auto-properties (and syntactically more appealing properties in general), native iterators, extension methods, operator overloading and cast overloading, optional parameters, genuinely stack-only arrays and types (structs), opt-in native memory management with pointers and memory allocation outside the garbage collector (in .NET 6+), and more.
Just the non-erased (reified) generics by themselves are already huge compared to Java: try comparing the performance of a list of bytes in Java using List<Byte> vs. a list of bytes in C# with List<byte>; it is not comparable at all.
1
-
1
-
1
-
1
-
@Me-wi6ym
I had a quick look at the Scala docs for those, but I don't think case classes are similar to Rust structs. They are reasonably similar to some constructs you can build with structs, but fundamentally different.
The point about structs is not that they enable a certain kind of behavior, but that they are inlined in memory. Scala runs on the JVM, which as far as I know currently doesn't support value types (until Valhalla lands or something happens), so all custom types are boxed in memory in the "worst" case (which is much, much, much slower).
Considering Scala also does not have a borrow checker (while Swift has an optional ownership feature with ~Copyable), I feel Swift can model <more> features closely to the Rust API. At the end of the day I don't think either is exactly akin to what Rust can do, but Swift, for me, is closer in this regard because it is also considered a systems programming language.
Still, I want to use Scala a bit before giving a definitive conclusion. I have used Kotlin so far but not Scala yet. I think Kotlin is very close to Swift in syntax (different in many aspects, but close in many others; I personally love the trailing-lambda feature of both languages).
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@user-qn9ku2fl2b
It is not a paradox, that's why I said this here:
["But this appears to be a one-sided reasoning, because it can work the other way around as well."]
For me, this whole "it is safer" point is a problem in itself because it is one-sided (there are problems both with static and dynamic linking).
And my point is that a partially out-of-date system <can be> safer, not that <it is>, nor that it necessarily is. We saw something closely related to this a while ago with the xz incident: think about the time you need just to update a distro's version of xz containing the terrible vulnerability, versus the time each individual package would have taken to update its own copy of xz had it been statically linked, and you can see part of my point. Obviously, my original point says "one-sided" because I also recognize the other direction, where the vulnerable xz would be statically linked into many packages and you would stay vulnerable until each of them updated its version (or you rolled back yours).
Summarizing, my view here is much more "this is all nonsensical bullshit" than "you should prefer X to Z (pun intended)".
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@rt1517
1. It is related to the comment in the Rust code, bro, what are you saying? It is pretty clear why they made this decision in this context, and they stated it explicitly. And while I agree with you that this <can> be <potentially> a problem in <some> situations, I don't think it is a hard and strict rule of how things should work. As I said, they commented on why they can't verify and return these errors; they reasoned about it. Even so, you should not brand the feature a "design flaw" just because of that, in the same way we don't call automatic resource management (including defer) a flaw in any language just because it can't avoid some pitfalls (like C#'s automatically disposable structs lacking an ownership or verification mechanism); a programmer should then take care, in cases where this is important, and verify manually. In the end, it does not matter at all if you don't care about the result, which is the case in most userspace programs, as I said before.
2. ["C has a lot of issues and a new language should be introduced in the kernel.
But Rust, while it solves or tampers some of C issues, introduces some other issues.
Linus seems to have not spent the time to learn Rust enough to anticipate these issues."]
First of all, yes, Rust has its own issues (as all languages do; there's no silver bullet in programming and things don't come for free), but you are underestimating Linus and overcriticizing Rust for things that frankly are just bs most of the time and that are being worked around in many ways. So the question is not whether it "introduces some other issues", but whether it has value to add (which the language has in many ways, including openness to new developers and important native safety features C lacks). You can say Linus "didn't take the time to learn", or you can recognize that he is a skilled developer with good contacts who is actively reading the Rust code to be merged (as he himself said in the video), and that this is constraining the directions of the language in this matter. And as a matter of fact, we can say that Rust is <appropriate> for kernel code: we know the Asahi graphics drivers were written in Rust and are working pretty well (and the main developer said they were objectively easier and more reliable to build than with C), we know that Rust is being written into the Android OS code and is giving Google fewer problems than before (read the report), and we also know that Rust has been used to develop entire kernels before (so it works in this specific domain as well). So I don't think you are the big authority here on criticizing the appropriate usage of the language, or on saying they needed to "create a new language for their use case, kernel development"; that borders on arrogance.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@isodoubIet
You have a problem with reading comprehension, I see.
My claim was: ["Plus, function coloring is required whenever you <use any type system at all>, and it is not a problem because the Rust type system can infer the types for you, so most of the time this is not a concern."]; learn to read things entirely instead of nitpicking parts. Function coloring is a de facto thing in any language that has types, including C++. That does not mean the errors will be part of that function coloring; it just means it <exists> in the language when you are using it. If the point is that <function coloring = bad>, this should be a problem for you too.
> Also nonsense. If you change the error handling strategy of some function deep in the call stack, you'll end up needing to refactor an arbitrary large amount of code.
Just like if you suddenly need your exceptions to matter more locally, you need to refactor an arbitrarily large amount of code to catch them at every single call site. It also says something to me if you are designing your systems so badly that this is often necessary, but anyways. And if you use Java, for example, exceptions can be <function coloring> too, as checked exceptions need to be declared in the function signature (and if you skip them, you need to mark your function or convert them to unchecked ones). This is just a terrible point in defense of exceptions.
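The point about fallibility showing up in signatures can be sketched in Rust (the function names here are mine, illustrative only): a caller either propagates the error with `?`, which "colors" its own signature with a `Result`, or handles the error locally and keeps an uncolored signature:

```rust
use std::num::ParseIntError;

// The leaf function's fallibility is visible in its type.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

// Propagating with `?` makes this signature carry the Result too.
fn load_port(s: &str) -> Result<u16, ParseIntError> {
    let port = parse_port(s)?;
    Ok(port)
}

// Handling the error locally keeps the signature "uncolored".
fn load_port_or_default(s: &str) -> u16 {
    parse_port(s).unwrap_or(8080)
}

fn main() {
    assert_eq!(load_port("80"), Ok(80));
    assert_eq!(load_port_or_default("oops"), 8080);
}
```

The refactor cost cuts both ways: making a leaf function fallible forces signature changes up the propagation chain, exactly as catching a newly relevant exception forces changes at call sites.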
1
-
1
-
1
-
1
-
1
-
1
-
@lpprogrammingllc
See how funny this is?
You came here, randomly shat on the language by claiming it had "broken promises", showed a compiler bug, and cited Residual Entropy as your source; I have watched his videos, and he seems very understanding about the problem, because he understands the difficulty of what is at hand. Now you are saying I'm exhibiting some kind of "rust advocate behavior", as if that had ANY meaning whatsoever (spoiler: it doesn't). You say I'm "assuming bad faith" when "someone doesn't like what I like", and by assuming this, you are also assuming bad faith on my part, thinking I'm doing that instead of having actual reasons to believe what I said (and spoilers again: I have, and I cited some of them in my previous comment).
No, this is not a language-level bug, because that does not make sense at all: the language does not even have a formal specification to have "language-level bugs". The bug in question is a product of assumptions they had to make when implementing the current trait solver, and it is obviously not intended behavior by any means (it is literally caught by MIRI, so it should not be an intended thing; <and also>, since the language, as you said, is <promising safety> in <safe code>, charity implies this was never intended to pass as sound code, while it does because the verifications were not properly implemented at the compiler level), so it is indeed a <compiler bug>. It is a compiler bug that cannot be easily fixed, because doing so requires revisiting many assumptions in the compiler, because it is a complex bug, but it is still a bug that is being fixed (and already has a fix in the new trait solver), so calling it a "language-level bug" is just disingenuous. You cited the bug report as proof that it is a language-level bug and, to nobody's surprise, it does not imply that anywhere.
I'm open as well to proofs that this can be classified as a "language-level bug", but more than that, I'm interested in knowing how this changes anything for anyone interested in the language when the developers are already dedicating their work to fixing this bug.
Yes, the bug reports are still marked as open because the fixes are not yet in the stable language and the new trait solver is not yet stabilized. I also don't know when it will be (though I've read in their roadmap that it should be ready by 2027), but it is being worked on, and as such it is not in good faith to say they had "broken a promise" because such a complex bug exists (a bug with zero records of being found in real codebases so far; a bug that can be caught with MIRI, which is something you should <already be using> if you really want to ensure your code has no detectable safety problems) and is <actively being worked on> (i.e. this is not a thing they "forgot" or "ignored").
As for this:
["Again, this is orthogonal to the real reason I will not use Rust. Which is the complete lack of trust I have in the entire Rust supply chain, because of people acting like you."]
You are free to think the bs you want to think, and to say that people responding to your lies are acting like "rust advocates" (when in fact you were lying: you said it was "unlikely to be fixed without serious breaking changes", you had not even read the material available on the problem when you said that, and you proved this in your subsequent comment).
Either way, I'm not a "rust advocate"; my main language of daily use is not even Rust, it is C#, and I program in many languages. I'm no more a "rust advocate" than I am a "truth advocate", and you are indeed acting in bad faith with your comments; you are being weird and shitting on things (you literally started this by citing something you DON'T understand: you pasted only a part of a function that <is not very unsafe> without the other part). This is not the behavior of someone who really wants to have a purposeful discussion of a topic.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@Bebinson_
It seems to me that you said you weren't going to try to say how the American DMV system should work, and then immediately went on to say how the American DMV should work, so your statement seems discursively empty.
This type of problem has absolutely nothing to do with whether the government delegated a function to a company; it has to do with the nature of that function and the very notion of delegation. Delegating means granting: it means you pass the authority over a certain task to a third party. If you read between the lines, that means the government has basically moved this burden from itself to a company, and that fundamentally does not solve the inefficiencies that would be seen in the government itself. It improves the chances that something good will come of it, as companies tend to be more efficient in the way they do things, but as long as this is still a concession, it is still an exclusive right to do a certain thing, granted by the government, and that encourages irresponsible and bad behavior like this. So the solution should not be to "nationalize" this task, but to free it up completely and allow each company to provide its own solution to the problem, and let them compete on their solutions in order to establish a higher degree of quality than the government could (or a worse one, as the case may be). This is the only fair way to establish whether private contractors are really worse or better than the government.
1
-
1
-
1
-
1
-
1
-
@amputatedhairstrands
Brother, I literally live in Latin America; are you saying you know more about my country than I do, living here? If this is capitalism, then the United States is flying super-capitalism, because there is literally nothing of a free market here that doesn't come from the gray and black markets. I selected Yemen and Sierra Leone because they are among the countries with the highest rates of hunger on the planet; they (along with a few others) concentrate most of the numbers, and that's why I chose them. And I don't consider that only first-world countries are capitalist, but the notion of capitalism has been so distorted that they call countries like Brazil capitalist (where I live, and I really doubt other Latin American countries are much better, considering our news here about them), when to open a tiny company we have to go through an unimaginable amount of bureaucracy and literally fight against what we affectionately call 'the lion' (aka the state) to survive. I would really love it if you could cite for me 3 countries that have a wide free market, freedom of trade, and political freedoms that are going through these terrible conditions you mentioned, because it doesn't seem to be the case that the countries most cited in the rankings of hunger, death, and violence can be classified this way. As for your final statement that capitalism is doomed to collapse, that's socialist bullshit, but I don't expect anything better than futurology from you guys, really.
1
-
1
-
1
-
@amputatedhairstrands
Well, now I will respond. I understand that in your case, the Brazilian health care system has helped you, I really do. However, this, for better or worse, is still anecdotal evidence. Just like my grandmother received terrible healthcare through our system when she was alive and eventually passed away, and many people in my family have suffered similar fates, unfortunately (as have thousands of other people). I'm also not going to say that our healthcare system is trash in its entirety; it naturally helps millions of people every year, and that's wonderful. But it's also not the be-all and end-all, and what they sell on the internet is not compatible with the reality of thousands of people who are actually dying in its queues. And while it's true that healthcare treatments tend to be very expensive, I recommend caution in blaming the market for this and not the real problems that are well known, such as patents (one of the main barriers to entry in the healthcare market, basically because you're at the mercy of some large corporations protected by the state to acquire some essential things in the treatment of people), regulations and bureaucracy (something that makes a huge difference, and that can be the difference between having a more accessible service and not; there are hundreds of thousands of small barriers that accumulate when it comes to creating your own healthcare service), and all the tax burden that we already know about. So, I wouldn't say that the 'free' Brazilian healthcare system would be the only possible solution for cases like yours (although due to world conditions, it is your only option); it seems hasty to blame the market for something that is essentially caused by state actions.
When I said that Brazil is approaching socialism because of its high bureaucracy and limited freedom, it wasn't an attempt to literally equate the two things, but to say that bureaucracy is a reflection of socialist thought being transferred to the state. Bureaucracy implies the indirect control of the state over production and distribution, which represents 'degrees of socialism' over what would be considered our 'capitalism.' When they start to put dozens of barriers and compliances to be resolved so that you can have a business, and they put you under the gun (metaphorically and literally) to lose all your assets in case of non-compliance (as happens with the very high fines that are imposed when companies do not comply with these bureaucracies), the difference between what we call the market here and a centralized economy starts to blur more and more, and it's in this sense that I attribute this issue.
As for the problems of corruption and money going to the wrong places, I agree with what you say, but see that the problem turns out not to be simply 'evil capitalism,' but rather a series of other considerations that need to be taken carefully when analyzing these issues?
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@thesun9210
I don't know what a dragon is, yet I can tell you precisely that a dog is, in fact, not a dragon. You only need partial knowledge of a thing to be able to state negatives about it; in fact, in logic there is a kind of proof you do by literally eliminating possibilities (abductive inference).
We obviously don't agree on a specific definition of consciousness, but it is completely INSANE to say we don't know what it is; it is the common experience of all humans, something we all know from its being literally at the roots of our existence. If I say "consciousness is a little rock in the park I saw the other day", you would immediately say that does not even make sense. If I say consciousness is being sad, you would have an intuitive understanding that sadness is a conscious state you can be in, rather than consciousness itself. If I said consciousness includes some level of awareness of yourself or others, you would understand that, based on your experience, this does, in fact, make sense, etc. We can't define it precisely, but we can ostensively define it without much controversy at all.
1
-
@thesun9210
It literally makes it not a dog, because if a dragon were a dog we would [call it a dog], not a dragon. I have partial knowledge that dragons need to have long tails and scales; I know there are reptiles we call "komodo dragons" because of their similarities to what people in mythology call dragons; we know it is probably not a canine, so I can rule that out. This is reasonably uncontroversial partial knowledge about the thing, and if you came and said "well, a dragon does not have those characteristics", then I would say your dragon is an entirely different thing, and that either you should choose a better name for it, or everyone else should. This is how we arrive at sane conclusions even about things we are not certain about.
And yes, the nature of consciousness IS that common, and it IS extremely relatable; the fact that individuals differ is not a contradiction of that. We are talking about what consciousness is, not what a self is, not what a specific instance of a consciousness is. And yes, I can say my consciousness is the same as that of a schizophrenic person; a schizophrenic has the same basic characteristics as my own consciousness. It has awareness, it has subjective experiences, it manifests in first person, etc. What I cannot say is that the schizophrenic has a perfect understanding of the world, which is a specific feature of consciousness (which is also not the same as saying he does not have one). But we also need to remember that if a dog has its tail cut off, this does not imply the definition of a dog excludes a tail; a dog is a being with a tail, and if a particular dog does not have one, it is more a matter of accidents happening to the thing than of the definition being wrong. It's a question of ontology.
Consciousness is a universal thing, experienced universally (even if accidentally different in aspects) by all humans; it has objective, commonly shared characteristics we can talk about and understand in each other (in fact, human communication would be impossible if this level of resemblance did not exist). It is as abstract as it is concrete, and we know it exists because WE HAVE IT. And we have no reason whatsoever to think AI has any kind of consciousness.
1
-
1
-
I believe your calling a study an "objective fact" has not helped PrimeTime's perception. The exorbitant amount of lying studies out there is no joke; there has literally been a crisis in the social sciences caused by statistical lies, so skepticism is always welcome.
That said, it seems reasonable that you'd be generally more productive if you had to exert yourself for less time: just as muscles can't exert themselves for too long before becoming less efficient, the brain probably also can't focus for too long before getting tired. However, I think the bottom line is not the time you spend working, but the disposition you bring to the job and the amount of effort you are willing to invest (in the same sense that muscles grow when their strength is exhausted, the brain probably becomes more proficient the more you develop it, so we should check whether, in the long term, the people working 4-day work weeks were actually better at their jobs than programmers working longer).
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@AkiRa22084
Yeah, I'm against copyright in general. Copyright actually limits human creativity rather than amplifying it, because you cannot iterate on existing things (and human creativity is defined by arranging existing ideas in novel ways); you can never be sure that something you created won't get you sued because some crazy man filed a patent 5 years ago just to sue whoever eventually has the idea and makes it practical (and yes, this happens). Patents are also part of the reason why the U.S. healthcare system is so damn expensive in relative terms, and they are the reason why we often have just a few common sources of medicine that share the same roots and have to pay for their production.
Humans were creative long before the notion of intellectual property/copyright ever existed, and they will be creative without it as well. We are not robots; our minds are naturally creative, and this creativity should not be curtailed and treated as property. It is also clear that intellectual property is something bizarre and strange when you need to tell people "you would not steal a car" when they are pirating, because nobody really thinks of pirating things (or copying them) as "theft" in any meaningful sense, so you need to fabricate it by bringing real theft to their minds (like actually stealing a car). The fundamental difference is that nobody loses anything when you copy some idea; when you use it against the "IP owner's" will, nobody is really being violated, and the idea is still in the owner's head. But when you actually steal something, somebody <will lose> that thing. When you recover a car that was stolen, it is obvious that you are doing justice, but when Nintendo sues a mod maker because he used "their" characters in his mod, your natural morality will tell you that this is wrong. That's why you can agree that some kinds of copyright are "draconian" (as you said), but you would hardly say someone protecting his property against a trespasser of any kind is "draconian".
1
-
1
-
1
-
1
-
1
-
@dave7244
Well, let's go. I'll take this a little more seriously for the sake of discussion.
1. The hello world program doesn't generate 12mb; I literally created one just to check whether that was true. In total we have 5mb including configuration files, debug symbols, and the git repository that is created by default. The final release-mode binary is around 125kb.
2. This is a pretty ugly thing to assume. See, if the fastest and most efficient programs are the ones compiled with -O3, why isn't that the default? The answer is that there are reasons, reasons that need to be carefully balanced by the programmer when creating a build. I've made C builds that quietly took up over 300kb with just a little code; does that mean there's an inherent problem with the language?
Rust has levels of optimization to balance. When you use the standard library, you are using code written very carefully, code that makes heavy use of language features and that will be statically linked into your binaries when compiled; this increases execution performance at the cost of a little more space. Do you know what else increases performance (in its respective cases) at a cost of space? Inlining. Rust does a lot of inlining, and it also focuses on vectorization and loop unrolling where possible; as expected, this results in bigger binaries. Do bigger binaries mean worse code in this case? Hardly anyone serious would say that. This is a trade-off of space for time, something quite simple for a developer to understand.
Rust also uses LLVM, and I've heard in the past that LLVM tends to produce bigger binaries; for better or worse, that's something to be considered. With the new GCC-based compiler (when it's stable) we can find out whether this is also a factor.
3. Finally, if size is all you care about, we can make the binaries smaller; using some profiling tools and eliminating the static linking of the stdlib, you can make your hello world 99kb, a 20% improvement compared to common builds.
Your question should be: do I really need to do this? Is it so absolutely important to save 150kb in a world where 1GB of hard drive can cost much less than 0.01 cents of a dollar? That sounds like a pretty hollow critique when we put it into perspective.
Assuming you're talking about using this for embedded, one of the language's target use cases, that would be a valid thing to say... if it were actually true. As you would with C or C++, if you forgo the conveniences of the standard libraries, you can create very small binaries for your embedded targets, and the size of your hello-world binary comes close to a mere 9kb.
---
That said, I still expect to see positive changes in the switch to the GNU compiler, and time should bring even more exciting things with it.
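For reference, the size-versus-speed trade-offs discussed above can be tuned through Cargo's standard release-profile options; this is a sketch of a size-focused profile (actual savings vary by project, so treat the numbers above as indicative rather than guaranteed):

```toml
# Cargo.toml — a release profile tuned for binary size rather than speed
[profile.release]
opt-level = "z"     # optimize for size instead of speed
lto = true          # link-time optimization across crates
codegen-units = 1   # better optimization at the cost of compile time
panic = "abort"     # drop the unwinding machinery
strip = true        # strip symbols from the final binary
```

Building with `cargo build --release` then picks up these settings automatically.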
1
-
1
-
1
-
It seems to me that this whole discussion is really pointless.
It's pretty straightforward to work with the feature-branch model, and it doesn't tend to cause as much conflict or desync as its critics claim. It's just a matter of keeping "features" short and to the point, and when a feature gets long enough, you merge not the feature into dev, but dev into the feature, thus keeping the two in sync in exactly the same way as if it were a short feature, but without the problem of merging something incomplete into dev.
Briefly
For quick and simple features:
1d. feature -> dev / Fast, easy and straightforward, you developed it and then integrated it
For longer, more complex features that require intermediate steps that are impossible to skip:
1d. feature
2d. dev -> feature / Keeps the feature in sync with changes in the dev, fixes any conflicts that appear (which should be rare)
3d (finished). feature -> dev / Finished, will probably not have any conflicts and will be complete in the sense that it resolves a given change completely
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@an_account-o2o
I don't know what science you're talking about, darling. A quite significant number of theoretical aspects in physics have no concrete evidence, yet they are still credited as "true" (at least in the pragmatic sense). This is simply because they fill voids in existing theories. Not to mention we're not talking about natural sciences but social sciences, where the amount of material evidence tends to be even scarcer. Statements like 'we don't know, therefore it doesn't exist' are worth as much as nothing, and we are constantly coming across new evidence for all sorts of things (for example, recently some curious articles have come out challenging several basic and well-defined notions about the origins of humanity; we've also had several changes in perspective in this regard over the years). Economic laws are derived from laws of human action, and in this sense, they cannot be subjugated by the absence of evidence. The limited understanding of economics that anthropologists have is not an excuse for things to be different (just the fact that real-world economics works should count as excellent evidence that we're not talking about 'myths' but things that must be real and foundational for modern economies to function in the first place; people effectively deal with money as a medium of exchange, and they can effectively use non-monetary means as mediums of exchange, just as it is possible to find the evolutionary process for primitive forms of money when they do this). There are many other complications about this type of 'argument from absence' that could be made, but this should suffice.
1
-
@adampliszka4855
It's just a word, my friend; does it bother you? I admit it's a verbal habit of mine, but I don't see it as problematic.
Regarding your statement about methodology, it's true but it's also literally what constitutes the limits of scientific epistemology. You can only know something to a certain level, and that doesn't stop scientists from creating more and more postulates. By the way, you can demonstrate a negative; you just can't evidence it. If I can logically prove none of your ideas make sense if what you're postulating is true, that's a great way to prove something doesn't exist without negative evidence.
['A quite significant number of theoretical aspects in physics have no concrete evidence' - An example maybe?]
I don't get it, why the need to imply 'universally accepted'? As an example of a non-'universally accepted' postulate that has great traction in the scientific community, we can mention dark matter.
['Economic laws are derived from laws of the human action, and in this sense, they cannot be subjugated by the absence of evidence.' - This just seems like a non-sequitur.]
I see no logical necessity in what you've put forward; economic theories don't 'absolutely need evidence.' As clarification, I can cite Austrian theories of economics, derived from a priori aspects and applied to real cases based on their validity. We can discuss praxeology, for instance.
['the fact that real-world economics works should count as excellent evidence that we're not talking about myths' - First of all, 'work' in what way and what specific parts?]
They work in the most pragmatic way possible; they happen as they were conceived to. When economics states laws describing the decrease in real value in favor of nominal value due to inflation, or when it describes subjective exchange processes according to the intersubjective preferences of the parties, etc., these are evidence for what is described. And more: when you don't agree with the "goblins explanation", just go there and refute it. As you are talking about empirical matters, you can use Occam's razor to get rid of those pesky goblins and prove that the more mathematically oriented and clean relativity explains gravity better than the 'goblins'. It is the same with economics: if you think praxeology is wrong, just give a proof that it is indeed wrong and break with the Austrian theory of economics. Write a paper and send it to me; you would be a hero for some socialists and other anti-market people.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
It is surprising that a channel called "how money works" did not anticipate a fatal flaw in this reasoning.
Money is not a given; it does not have intrinsic value. Your points reduce the fundamental nature of how money flows and gets its value to a series of simplistic assumptions. Rich people cannot be the ones who "buy everything while the poor enjoy the free stuff", because rich people's money doesn't come from thin air; rich people's money comes from the people, the same people you said would be getting free stuff, buying their products. If things are free, you have literally removed the essential part of this: you removed the rich people's money. Without rich money you can't sell to rich people, and without being able to sell to rich people the problem stays exactly the same; you did not answer the question in the first place, and everything you said falls apart here.
The games market is not an "example" of your theory, because purchases in games lack a very essential property that exists in real-life economies, which is scarcity. The cost of letting people play games for free is absurdly lower than the cost of producing a good and giving it to people for free, and people don't really play the games for free: free-to-play games are literally and absolutely known for having a whole bunch of ads in them. Advertisement for what? Yeah, advertisement pays because it makes people <buy> products, and the advertisement helps offset the costs of running those free-to-play servers. Without people buying products you don't have value in ads, and you don't have value in letting people play for free (and that's why almost all free-to-play games have ads in some form or another).
The idea of UBI also doesn't solve the consumer problem (before someone tries pointing to it); it is no different from the flawed socialist notion that you can decide for people "what they need". It is not a response to the organic production and distribution of resources in a market based on consumer needs; the very feedback of consumers would be permanently distorted by such a thing.
1
-
1
-
1
-
1
-
1
-
@s3rit661
I don't think this is "the same argument".
1. Checked exceptions are, in fact, optional in Java: you can quite literally ignore them and turn them into unchecked ones (and many of those that are actually important, like NPEs, are unchecked). Kotlin is a language that runs on the JVM and literally ditched checked exceptions; if this concept worked well in Java, I don't think they would have gone that route.
2. The Rust thing is about <memory safety>, not about "ensuring you never make mistakes". Memory safety is a non-problem in languages like Java, and Rust ensures you don't make mistakes with memory via the borrow checker, which is a different thing from the error-checking mechanism. The guarantees you have there are extremely specific as well; it is not "it is impossible to make mistakes", it is "if you use the type system, if you check your unsafe code with tools like MIRI and test it properly, and if you don't circumvent borrow-checker rules, then you are in the space of programs where it should be impossible for you to create a memory-safety issue". Errors as values are a bonus, and they work pretty well with this system.
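A minimal Rust sketch of the two separate mechanisms mentioned above: the borrow checker rejecting a memory mistake at compile time (shown as a comment, since the offending line would not compile), and errors handled as ordinary values, which is a distinct feature layered on top:

```rust
fn main() {
    // Ownership: moving a value invalidates the old binding.
    let s = String::from("data");
    let moved = s;
    // println!("{}", s); // compile error: borrow of moved value `s`
    assert_eq!(moved, "data");

    // Errors as values: failure is an ordinary value you match on,
    // entirely separate from the memory-safety guarantees above.
    let parsed: Result<i32, _> = "41".parse::<i32>();
    match parsed {
        Ok(n) => assert_eq!(n + 1, 42),
        Err(e) => panic!("parse failed: {e}"),
    }
}
```

Note that neither mechanism claims to prevent logic bugs; the borrow checker rules out a specific class of memory errors, and `Result` makes the remaining failure modes explicit in the types.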
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@asdfqwerty14587
I'm sorry, I didn't mean to say that everyone should produce their own things. Maybe I misunderstood your response?
But decentralization is also about the division of labor, so the question is more like:
No, it is not better to have a centralized economy under any circumstances, but this doesn't mean abandoning principles we already know work very well, like the division of labor. I think, under this interpretation, the main commenter is conflating decentralization with "producing your own things entirely", and that is also worth a response.
But if we interpret the commenter's response more charitably, we would probably read "people producing their own food" as "people organizing production structures closer to their own needs than big corporations do", and this would probably increase costs (as it would lack the government funding big corporations have) but would also make production more resilient to failure. This would be more reasonable, if that was the intention.
Either way, my response was more about the centralization thing than a defense of the "produce your own food" insanity.
1
-
@thomasmarek7310
Well, then I think your vision is really more aligned with the interpretation I gave in the second half of my last comment.
I think, though, that this vision is both interesting and somewhat problematic.
Needing food and products from somewhere else is not a sign of centralization; the market is pretty much decentralized (centralization means central coordination, whereas in markets you have dozens of thousands of independent decision-making units, guided by consumer feedback itself). But I also agree that interdependence (which is different from centralization) can be a problem in the instances you pointed out (it can be a problem if somewhere in the chain there is a failure). I still think, despite that, that the problems we experience are generally unavoidable to some extent; it could be your community that is affected by a very destructive event, in which case your community would then depend on others' production, so you should be careful how far you take this reasoning. Many distribution problems arise, however, as interventions from a centralized entity (i.e. the state), but those problems require a different analysis.
Some things are also inherently impractical to produce in most places, and you need to import them to have them at all (even water in some places, but this generalizes to a variety of commodities), so even the application of this idea needs many exceptions. And I generally think that by producing for external consumption you can achieve higher degrees of mutual benefit (you consume products other agents are very good at making while producing things you are very good at making, or even producing things you are not so good at while consuming things other countries are very good at producing that benefit you). These are general principles of economics, and there are trade-offs with most things, so we kind of have to take the ones that appear to have the most benefits.
So yeah, I definitely think it is good to have local production of many things, and to avoid relying on government-funded big corporations that are not in the normal flow of the decentralized market, but we should carefully work out which of those things are truly centralized (and generally worse to rely on) and which of those interconnections are acceptable trade-offs between relying on others versus consuming internally.
1
-
1
-
@elLooto
Fair enough. But you are conflating physics (which concerns itself to be able to identify mathematically the general laws that governs physical systems) with economics that are a human field and do not possess such capacity. Every single model in physics has the objective to be able to generalize to a class of phenomena and be able to apply to it, so the laws should be able to precisely describe what will occur in the pond when you throw the rocks into that pond (even if they don't give you the exact answer of how many ripples would form and their exact waveforms), we still know how they will behave and we are able to predict correctly what will happen. If physics where unsound we would not be able to build structures or machines, computers would be impossible, so we know for the efficiency on actually predicting physical phenomena very accurately that physics is not unsound (and, if some event in reality contradicts a physical prediction, the whole theory is generally revisited and can change drastically).
When we are talking about economy, however, the mathematical models do not possess such predictable power (as they more than often just fail), and they also can't be expressed as general laws as they are arbitrarily constructed to fullfil a target and model an entire society at once (if we are talking about macroeconomy, at least), from them no generally appliable rules that can break the entire theoretical basis can be drawn or refuted, they can only fail or be reasonably succeed at the time. What you are saying is that the models can't even be applied to real world because they are unable to be proven due to the ripple effect, but to even prove that they are accurate descriptions of economic behavior you need to apply them into the real world in a way that is sound, that's why a possible implication of this is that these models are unsound. Obviously, this do not means that economics don't have fundamental laws that describe market processes, but integrating these laws with the economic mathematical models is another beast that requires integration and epistemic justification.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@ekki1993
Really, this is not true, or at least not entirely. I'm not saying that corporations and colonialism had nothing to do with the wars in these places; they certainly did, but this is far, far, extremely far from being the whole truth. Wars are extremely common in these places; Africa itself, one of the most colonized places in the world, was steeped in wars and bloodshed (to the extent that a significant portion of the slaves sold to Brazil in the 19th century, for example, came from prisoners of internal wars between the tribes themselves). Similarly, various groups (including groups with socialist leanings) were responsible for civil wars and the destruction of peace in these places, and there are also dictatorships related to external economic interests, as you stated. Trying to reduce things to one spectrum while ignoring all the instability that had to exist for this whole process to be possible in the first place is ideology, and it is effectively failing to understand the problem. As for the notion that it's 'profitable' to mess with an entire country: this is neither guaranteed nor predetermined; it may be the case in some instances and not in others. There's no guarantee of profit when you're destabilizing an entire society, and it can end up backfiring on you in the end. Lastly, just to complement this: the only way for these processes to be profitable for corporations is if they have the support of the state (this ties into what I mentioned earlier about the possible insecurities of the process), and if you analyze it carefully, you will see that in every case there was very significant involvement by the state.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@skaruts
> But the point is, do you really want people to enforce property ownership and contracts themselves?
Also yes.
> And what happens when you fail to enforce ownership of what's rightfully yours?
The exact same thing that happens when you have a state: you seek help from your neighbors and family, or call the police (the private police, in this case). And to be even more fair, I live in a country where, if I got robbed, the police would do literally nothing. When it can't even solve 50% of the homicides, counting on the state to solve a robbery is truly an act of generosity.
> What happens when you're the weakest?
I make friends with the strongest.
> And what happens when people can't agree on who is the rightful owner of something?
They discuss and settle it on common ground, like ALWAYS.
> Which part of "you need the state to enforce contracts peacefully" did you not understand?
Again, you never proved that.
> Are you trying to argue that if I take what you spent the last year working on and pretend it's my work and just go around selling it, that it's not theft?
By "take" do you mean you will remove it from my possession? If so, it is still very much theft. If you are talking about ideas, then no, it is not theft, but claiming you made it when in reality it was me (or the other guy, in this case) still very much constitutes fraud against your clients, which is clearly liable and a violation of their property rights.
> You're literally saying that your work isn't naturally yours, and that someone taking it willy-nilly is not theft.
Nobody said that.
> There's no such thing as inherent human rights. Rights don't enforce themselves naturally. If there was no state enforcing rights, there would be no rights at all. People would have to fight for what they believe, or hire muscle to fight for them.
You are free to believe whatever you want, but nobody said that for a right to be natural it would be "naturally enforced" (whatever you mean by that). By natural rights we mean the rights we discover by investigating the nature of man, the things that are objectively right or wrong, and this gives rise to their enforcement. The state is merely one of many possible actors that could enforce rights.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@blallocompany
1. We have past studies without this limitation that arrive at similar conclusions. I clearly remember this being a thing when CoT started to become popular: the models' reasoning was not part of the reason they answered what they answered, and many times they would "reason" correctly and then arrive at a completely different answer, confabulating a response.
2. There's nothing specific in this paper about a limitation of a single token. They asked for a direct response, but the point was never for the model to just SPIT a token; the point is to test whether or not the model is able to THINK before spitting the token. The reasoning coming after the fact is literally there to test whether or not the model can arrive at a correct explanation of how it did it, plus whether it can identify the process itself. If you ask a human how much x + y is, even if you ask them to say only one word, the human will still go through a mental process to arrive at it, and if you then ask for the method, the human will describe it correctly.
You don't need to be able to "introspect how you """generate""" a word", because that is a pointless question. You don't "generate" a word, you CHOOSE a word based on what you are thinking and how you want to say it; there's no disconnect between your thinking and your speaking. When you are asked about X, it is already in your stream of consciousness, you already know the options and then you just pick one. Words are chosen by volition, which is a deliberate act of will, and most of what happens in the flow of thinking is rationally explainable by yourself.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@PrajwalDSouza
I did some tests with it, and while it is clearly way better at "reasoning" (if you think this word applies here), I don't think it is qualitatively better the way people are claiming. I tricked it pretty easily, for example, with some indirection in the problems I gave it. Try formulating a problem with a deceptive conclusion that anyone would catch if they really solved the problem, but which is not obvious enough, so it stays untangled from the bigger problem; the model will probably fail and try to justify the tricky part within the framing of the bigger problem instead of noticing that they are contradictory.
I think "memorized how to reason" in the video is a good way of putting it, even if not entirely accurate, for this particular reason.
But anyway, glad to be proven wrong, and hoping that o1 (not preview) will be better. If it really solves problems in physics, then we will have many advancements in technology very soon; if not, well, at least it was a funny toy.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@shinobuoshino5066
It depends on what you mean by "proof". Rust is literally logically checked for safety, so on this point I think you are simply talking about something you don't know. Rust, of course, cannot solve <all> classes of safety bugs, but within the borrow checker rules and the default strictness of the compiler, it can <ensure> the safety and correctness of the code as written, by the logic it is written in, in a great number of cases. It has proven itself on Android as well: about 20% of all new Android code is being written in Rust, and Google itself reported that almost no memory unsafety bugs have been discovered in this new Rust code. From their 2022 report: ["To date, there have been zero memory safety vulnerabilities discovered in Android’s Rust code."]
While this is not definitive proof that Rust will solve all memory safety issues in practical code, heuristically we can say so, because the safe-checked subset of the language is formally sound, and because we know a bunch of companies have been adopting it over time and they are not generally going back to other languages because of security problems.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@pxlbits6442
It seems to me that you are oversimplifying the problem here. While it may be true that the USSR did not suffer extreme periods of famine after its initial period (and even then, what should that imply? The famines of those earlier periods were still a fact), it is also not true that everything went wonderfully. Problems of distribution and resource allocation were common in the USSR even after it had industrialized, and the greater stability of the USSR's economy has often been attributed to various factors other than socialism: its vast network of black markets that supplied the population, the various market concessions made to private actors, and the fact that it imported food, resources, and even price information from countries with freer markets, among others. There are other fundamental issues at play here too, such as the fact that calorie intake implies nothing more than calorie intake; if you only eat fat you will be ingesting a massive amount of calories and still not equating your quality of life with that of capitalist countries. The fact that socialist regimes also had an infinitely smaller variety of food and choices is a crucial factor here, considering that this impacts people's quality of life. It is also debatable whether they really had more nutritious food than the United States; I would like to see the sources for that particular claim (although it is also debatable whether this makes much difference: if all you have to eat are cereal bars and carefully planned rations, you will probably have very high levels of nutrition and reasonable amounts of calories, but the price of that will be your freedom). So slow down, things are not exactly like that.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@pedrolucasdeoliveiralara8199
Sorry for the delay, I ended up forgetting to answer and I only remembered today. That said, let's go.
1. Yes, C is a great language to learn the basics of programming and the way computers deal with pointers and the like, assembly is also a good way to get into the more intimate aspects of computers, especially if you're dealing with microcontrollers.
2. (a) Your statement is false, simply completely false. I have used high-level languages for most of my career, over many years in the profession, and I can definitely say that Rust is an excellent and pleasant language even compared to high-level languages like C#, Kotlin or Java. Having used almost literally all of them, I was able to observe how each is suited to a particular type of work, and in that sense you are simply wrong.
(b) Next, Go does not have the same performance as Rust. I personally hate arguments based on performance, but in those terms Rust is on par with C in several cases, and in most existing benchmarks it surpasses Go. That said, I must repeat that this is irrelevant to me and to the 99.9% of cases people will need it for.
(c) Rust will help you understand program architecture in the sense that the compiler enforces strict, well-defined rules about memory usage. Using Rust, the average programmer comes into contact with concepts they would otherwise only meet in very specific cases, after following bad coding standards in other languages; you will learn the basics of message passing, and because of the borrow checker and immutability by default, you will learn the benefits of using them. These concepts are part of what we conceive of as "architecture" in software: the way projects are structured.
(d) I don't understand your point about "smart pointers". Rust doesn't use them by default; the borrow checker does this work at compile time, but smart pointers can be used in cases where ownership of memory needs to outlive what the defaults allow. In any case, I never said Rust would teach anyone "memory management"; I'm just replying because the topic piqued my interest.
3. (a) I believe I just refuted your claims by saying that I found Rust to be an excellent language after using almost literally every other language, and I'm being honest: I've used Lisp, C#, Clojure, Java, Kotlin, Lua, JavaScript, TypeScript, PHP, C and even a bit of C++ over the years. There's a right tool for every job, and I can perfectly see how Rust fits into the most diverse development cycles.
(b) I can grant the point that Rust is especially useful when you're dealing with low-level languages like C/C++, because it simplifies and secures many processes that are otherwise more complex in them, but I can also see it being used for higher-level purposes in more or less specific cases (this is literally embedded in the language; it is meant to be both high-level and low-level). Rust is great for making the structure of a program behave predictably (due to its error handling), it is great for preventing common mistakes with concurrency (the borrow checker is able to eliminate data races), and it is also great when you need to design a system with low latency. It definitely has its value, and even the C# designers are borrowing some of its concepts to improve the language in some areas.
(c) Finally, I don't know where you got the idea that Rust is one of the hardest languages to understand; that's a rather strange statement. The only genuinely difficult things in Rust are the coding standards the language imposes, such as keeping things immutable by default and the data ownership structure enforced by the borrow checker. Once you understand the most basic concepts of the language and how programs are structured in it, anyone should be able to produce something. My experience learning the language was incredibly enjoyable overall.
Addendum: I'm not saying that Rust is always the right tool for the job. If you can do the same task more simply with Go than with Rust, you're probably better off using Go, but there's no point in saying Rust is not worth learning. Learning a new language and understanding its concepts increases your own understanding of other languages and gives you access to ways of thinking that may previously have been unknown to you; in this sense it makes all the sense in the world to learn Rust if you have the time, availability and desire.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@craftylord3336
You use = to set these types because they are what I called primitive types. And even if they were not primitives, it would still make sense, because their memory layout is entirely flat.
When you write the number 100, the number is fixed and cannot be changed; every single bit required for its identity (which is value-based) is already there, so you use = to set a variable to another number (and the same goes for the other primitives).
Also, about a function called "copy" versus a function called "pointer": I don't get it. No language has a function called "pointer", because it would not make sense. A pointer is just a reference to an object somewhere in memory; underneath, it is a number (like an 'int'), so when you call 'copy' you really are copying everything that is flat there (including that int), just not the pointed-to object itself, which lives somewhere else in memory.
If you want a language that does deep copies of your lists, use one without reference types, like C, C++ or Rust. Most modern languages with references (including C#, Kotlin, Java, Dart, JS and Python) exhibit the same behavior you called "poor design", which in reality just makes sense if you think about it for a minute.
1
-
1
-
1
-
1
-
1
-
@mskiptr
1. This information about my devices is of absolutely no use when I am using a paid VPN through anonymous methods; they will only be able to sell general, not specific, data about me (if they sell it at all), so what was stated here doesn't make much sense.
2. About 'd': you can judge this in many ways. Since the data is encrypted (and depending on the encryption method used), the ISP may get inaccurate information about my data usage. Still, I will concede this point, as it varies with several factors.
3. Routers cannot follow 'b' all the way to the VPN servers, because the network traffic is encrypted; they will only know that I am directing my data to VPN servers (and in some VPNs, some relay servers are partially community-run and distributed, which makes it difficult to use this information for anything useful). After the VPN, the routers will know about 'b', but this information will be practically useless to them.
Point (1) defeats your assertion that VPNs are honeypots: if my identity is anonymous (and there are VPN services that can be used basically completely anonymously), then there is no way for them to learn anything useful about my behavior. All they will know is my origin IP (and they won't be able to tell whether it's my home IP, or whether I use some other service before theirs, etc). (2) and (3) are bonuses.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@anon_y_mousse
This doesn't make any sense. No one is saying you are inferior if you don't use Rust; we're saying that not using Rust lowers the overall safety margin of software, and that's a bad thing. Most people who don't like Rust are C++ programmers who simply believe that their language is divine and that you just need to be a good programmer so that mistakes don't happen (doesn't that remind you at all of the elitism you talk about?).
Rust doesn't diminish any knowledge either; it lessens the risk of making unsuspecting mistakes. You can still make conscious mistakes if you want. The difference between Rust and low-level languages like C/C++ is that most of the time you have to be deliberate to write bad code, and that effectively increases security, because humans really do make a lot of mistakes. If this is elitism, then so is using seat belts, using magnetic screwdrivers, wearing gloves when handling electricity, using switches instead of simply plugging and unplugging wires, etc.
1
-
@anon_y_mousse
I honestly don't know which Rust programmers you've been talking to; could you show me an example of this toxic behavior? Because I can, right here and now, bring you 10 recent discussions I had with incredibly toxic people in the C++ community demeaning Rust programmers (in fact, I've seen many trying to attribute characteristics to Rust programmers, such as being "estrogenated", as if the requirement to be a Rust programmer were a "lack of balls"; does that sound like healthy behavior to you?). I don't think you understand what programming really is, because you have the false belief that "being a better programmer" means being a "bulletproof" programmer. That's a childish thought, and an experienced programmer would quickly tell you how destructive it is. Programmers simply make mistakes, whether they are beginners (who naturally make more mistakes) or veterans (who make fewer); humans are human, they make mistakes, and if these weren't unintentional they wouldn't be called mistakes but bad actions. A security bug caused by a memory management problem could ruin the lives of thousands of people who depend on a critical system like Linux, which is precisely why we need to improve not only our programmers but our tools; making mistakes harder does not mean failing to teach the principles behind them. The seat belt analogy is excellent here: even the most veteran and capable drivers are at constant risk of an accident, whether caused by themselves or by others, which is precisely why artifacts like seat belts, which reduce the margin of human damage, are considered a positive thing. That's also why we endorse safer brakes like ABS, even though the old brakes are perfectly functional; not to make people "less capable drivers", but to make them less susceptible to unexpected human error.
Is it so, so difficult to conceive of this concept as a concrete thing?
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@TheManinBlack9054
Was it?
I searched for it and this was what I got in the sources:
["phi-1 is a Transformer-based model with 1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook quality" data from the web (6B tokens) and synthetically generated textbooks and exercises with GPT-3.5 (1B tokens)"]
and
["The language model Phi-1.5 is a Transformer with 1.3 billion parameters. It was trained using the same data sources as phi-1, augmented with a new data source that consists of various NLP synthetic texts."]
["Phi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value)."]
["We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone. The innovation lies entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered publicly available web data and synthetic data."]
None of the Phi models were trained on "purely synthetic data", only mixed training data, so we don't know to what degree they are really affected. We also don't know whether the degradation in the synthetic data from today's early foundational models is high enough to make a big difference, versus what it will be in future foundational models and/or upgrades to them, with more and more recursively fed synthetic data in their training.
I'm also not aware of any foundational model trained purely on synthetic data that is publicly available for checking; if you know of any, I would be interested in seeing them.
1
-
1
-
1
-
1
-
1
-
@yyny0
(my message apparently got swizzled, so sending it again)
This is true, yes, but arguably the overhead of doing these checks is minimal if you constrain yourself to where they actually need to happen. And while it is true that you cannot circumvent this problem when errors are values (unless you are willing to deal with pointer manipulation, in which case you can pretty much skip the check, but nobody does this), you also cannot avoid it if you try to enforce locality in error handling with exceptions (you will need try-catch everywhere in your code, and you will pay the cost of the extra exception-handling code in your function, just like in the errors-as-values case).
It is also arguable whether the cost of checking which stack frames <do> need cleanup would not outweigh the performance benefits you are claiming, but I did not measure this. Maybe I'm not even considering other factors here, like the non-locality of the code causing problems for the CPU's predictors (or the whole context-switching problem, which is a pretty massive hit for CPUs, although I can't say whether the cost amortizes when you don't have many exceptions).
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Immediate-mode UIs are generally extremely simple in terms of layout and style; this is the part of the picture you are missing. Rendering the DOM is slow because:
(1) We use JS to do it
(2) It is retained mode, so all changes are cached and as little as possible is re-rendered
(3) The layout and styling system of modern HTML + CSS is [extremely complex]: it allows a huge amount of positioning, calculations, transformations, event-based styles, very compound and complicated responsive layouts, animations, etc... all of those things take time, and they are pretty slow to compute
It is a different beast altogether.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@d3stinYwOw
Freedom should extend even to selling asbestos, which doesn't mean that the consequences of its misuse shouldn't be legally penalized; I think even you could agree with me on that. If you're using asbestos in your products and people are unaware of this and of its health risks, you're clearly an aggressor and potentially a vile murderer, and it makes total sense for you to be stopped. Now, if you sell asbestos while making its risks clear, what exactly is the problem? I believe even you should be able to acknowledge that scientific research, for example into making the material safe, depends on the accessibility of the material in some cases, and that restricting the use of something restricts the possible innovations that may arise from it, and even restricts uses aimed at preventing its harms (for example, research into biological ways of treating the diseases caused by asbestos). Accountability should be tied to real cases, not to society as a whole.
Finally, I'm not saying it's "black and white"; I'm saying that this specific issue is a clear problem, one which, yes, can potentially bring more safety in the long term, but which at the same time can destroy or cause irreversible damage to software development and free software. These are measures made by people who have no idea how the things we do work and who still want to screw everyone over using the excuse of "increasing our security". Just look at measures like the UK's Online Safety Act (and similar measures proposed over the years in the US itself) to know that not all the security in the world is worth some things, even though it's not all "black and white".
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@happygofishing
No, it literally does. What are you even saying? This doesn't make any sense; are people crazy nowadays?
See: you, RIGHT NOW, have the FREEDOM to pick up a fork and stick it in someone else's throat. You would obviously be violating that person's freedom and personal integrity, but STILL, you have that freedom. Freedom implies the freedom not to do something good as well as the freedom to CHOOSE to do something good (in fact, if you don't have a choice, there's nothing "deserving" or good in the action; you must first be free in order to perform virtuous actions).
What those licenses mean is that you are ABLE to use the code in your own ways. This doesn't imply you WOULD, and it doesn't OBLIGATE you to do it either. In fact, refraining from doing so is MUCH more virtuous than using copyleft licenses, where you are simply obligated altogether.
1
-
@z411_cl
This is not a good definition or conceptualization of freedom. If you take away people's freedom to do things, then no matter how much time passes, the result is still less free than the alternative of [not doing it]. The fact that you believe it is more free because it enforces a specific property you want (in this case, the prohibition on releasing software without the source) doesn't make it less problematic over time.
I also don't think MIT, Apache or BSD are the pinnacle of freedom, just so you know. I think a more free license would block things like suing people, and would allow everyone to reverse engineer your code and even sell modified versions of it (without using your name, obviously); it would restrict only the truly evil things most people can't fight against, like using the power of the state to crush people. But I don't know whether a license like that is even possible; I'm not a lawyer.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@cluddlesclaimscraniums6841
> I don't disagree that governments certainly are not on the moral high ground but relative to companies they are.
Again, I really don't think companies killed 100M people by sending them to wars. Literally nothing human has been as destructive for humanity in the last thousand years as the modern state (I'm being hyperbolic, but it is probably true as stated), and it is still causing murder and destruction worldwide.
I also like the argument that "those are not the same thing" because it was "another government", but the same could be said of all the companies you are generalizing about. The difference is that the state still has the same level of destructive power (as we know from all the nukes and the military) and the same kind of psychopaths in control as before; they are just more chill nowadays. The most destructive company cannot do more harm than the most willfully destructive state, because, you know, a few nukes and literally the world is doomed. And finally, while companies need the money people voluntarily give them to survive (literally), the state survives through armed and indirect robbery at mass scale (which we call 'taxes' and 'inflation', respectively) and doesn't need that kind of support; its destructive power is also unbounded by normal market limitations, since it has an """unlimited""" money supply for warfare in the worst-case scenario.
1
-
@KarimTemple
No, I never said "municipal governments". I think governments can take a wide range of forms, not only what you understand as government (like 'federal' or 'state' governments); a government can be as tiny as a neighborhood, or even a house, depending on the organization of the society around it. I am a defender of organic societies over state-based ones. But regarding your last question: yes, I think municipalities are better at understanding business, because generally speaking, the larger your scope, the harder it is to predict the full consequences of your actions in the short, medium and long term, so governance at a smaller scale stays more contained even when it goes wrong.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@anon_y_mousse
So honey, that's it. If you weren't convinced by the extremely obvious things I said, what would convince you? Absolutely nothing I could say in a YouTube comment. Your mind is set in stone and full of resentment, and that's just it. See how you yourself assumed that people "would have no idea what they do" when all I said is that a better tool provides better quality in your service; you are only defending your position ideologically.
I can literally imagine an ASM programmer arguing that C would be "a problem, that compiled code is not always predictable, that you need to master hardware programming at the lowest possible level, and that if you do you will never make mistakes" and that it should not be adopted, just as I can imagine a doctor refusing to use new, safer, less brittle syringes because "a good doctor would never let the needle break in a patient's veins", or some pharmacist saying that the new mixing methods are for "inexperienced pharmacists, and if you are an excellent pharmacist you will never need these new methods". At the end of the day it's just that: an irrational resistance to change. Thank God enough people didn't think like you and actively found new, better and safer ways of doing things, because if we depended on people like you, humanity would still be hunting with stone tools. It's that simple.
1
-
1
-
1
-
1
-
@tiranito2834
I don't think you are particularly privileged in intellect, so I'll explain it in an easy way, ok?
CVE-rs is just a <project name>, it does not expose a CVE in Rust. It exposes the <possibility> of having CVEs in safe Rust due to a bug in the compiler implementation (a problem that can be caught by using MIRI, for example). I'm saying that this is a very specific and extreme corner case that will probably never show up in the extreme majority of codebases <because it is so specific>. So if it happens, it would be because of bad actors intentionally introducing it into projects --- maybe through crates. The thing is that this is not something that will spread <normally> through codebases, because it really is an edge case, so it will not be a giant problem unless it is being used by those bad actors. But if the problem is the existence of malicious code in crates, then you cannot do anything about that even with a perfectly and entirely proven type system, simply because a crate can always do a system call, or use unsafe, or whatever, and steal your system's data --- in other words, it is a very specific problem and it does not impose a potentially <larger> risk than anything already out there.
I'm also not saying <this is not a problem>; I will repeat it again since you were not able to see it:
["Of course I'm not saying it is not bad, it is pretty bad, but still, edge cases do not remove the security benefits of the language."]
The Rust team is aware of this problem and <there is a way to fix it> --- and there will be a patch once that fix is ready, so nobody is just saying "oh, it is Rust, so it's fine"; people are just saying "it is such an edge and corner case that it will not matter for the extreme majority of codebases" --- but, and there is always a but, it does impose problems we need to deal with, because it taints the correctness of the type system to some extent in specific cases if used badly or unintentionally.
Again, it is not really a CVE per se, and nobody is saying it is not a problem. I'm saying this comment is wrong because this does not, per se, compromise the safety of the language in the extreme majority of cases.
1
-
@tiranito2834
Amazingly, youtube loves to delete all of my comments.
I will try again here as I copied it before sending:
I'll explain it in an easy way, ok?
CVE-rs is just a <project name>, it does not expose a CVE in Rust. It exposes the <possibility> of having CVEs in safe Rust due to a bug in the compiler implementation (a problem that can be caught by using MIRI, for example). I'm saying that this is a very specific and extreme corner case that will probably never show up in the extreme majority of codebases <because it is so specific>. So if it happens, it would be because of bad actors intentionally introducing it into projects --- maybe through crates. The thing is that this is not something that will spread <normally> through codebases, because it really is an edge case, so it will not be a giant problem unless it is being used by those bad actors. But if the problem is the existence of malicious code in crates, then you cannot do anything about that even with a perfectly and entirely proven type system, simply because a crate can always do a system call, or use unsafe, or whatever, and steal your system's data --- in other words, it is a very specific problem and it does not impose a potentially <larger> risk than anything already out there.
I'm also not saying <this is not a problem>; I will repeat it again since you were not able to see it:
["Of course I'm not saying it is not bad, it is pretty bad, but still, edge cases do not remove the security benefits of the language."]
The Rust team is aware of this problem and <there is a way to fix it> --- and there will be a patch once that fix is ready, so nobody is just saying "oh, it is Rust, so it's fine"; people are just saying "it is such an edge and corner case that it will not matter for the extreme majority of codebases" --- but, and there is always a but, it does impose problems we need to deal with, because it taints the correctness of the type system to some extent in specific cases if used badly or unintentionally.
Again, it is not really a CVE per se, and nobody is saying it is not a problem. I'm saying this comment is wrong because this does not compromise the overall safety of the language in the extreme majority of cases.
1
-
1
-
1
-
1
-
@RAFMnBgaming
? What the actual fuck?
You know banks commit robbery TEN THOUSAND times bigger than that event ever had any chance of being, right? In fact, in my country, the "good and safe centralized currency" was literally stolen by president Collor one time; the life savings of millions of people just disappeared because of that action. Do you really think centralizing things is the solution? Tell me, how does a bug, literally a bug, in one currency prove that the whole technology is flawed and inferior to centralized ones, when in this world it was the centralized currencies that were responsible for the most widespread robbery, direct and indirect, through inflation? Do you really, really think that is reasonable?
Look at Bitcoin, look at Monero, even Nano: these are the coins that show how decentralized currency is waaaay better than centralized currency when it comes to what matters most for a currency: scarcity, censorship resistance and reliability.
1
-
@RAFMnBgaming
@Several Fighters
? What the actual fuck?
You know banks commit robbery TEN THOUSAND times bigger than that event ever had any chance of being, right? In fact, in my country, the "good and safe centralized currency" was literally stolen by president Collor one time; the life savings of millions of people just disappeared because of that action. Do you really think centralizing things is the solution? Tell me, how does a bug, literally a bug, in one currency prove that the whole technology is flawed and inferior to centralized ones, when in this world it was the centralized currencies that were responsible for the most widespread robbery, direct and indirect, through inflation? Do you really, really think that is reasonable?
Look at Bitcoin, look at Monero, even Nano: these are the coins that show how decentralized currency is waaaay better than centralized currency when it comes to what matters most for a currency: scarcity, censorship resistance and reliability.
It is actually the centralized currencies that do the mass-robbery thing; I think you're targeting the wrong things.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@vouagiaz
["Anyway, I think we know what right-wing means today."]
No, I don't think so. For example, in my country you may be called right-wing or left-wing, if you are a libertarian, depending on who is elected to the presidency. When the right was in power we criticized it, and then what is perceived as the right called us leftists and pointed at the things we have in common with the left; and when the left is in power we criticize it, and those same people call us rightists, while leftists also call us rightists. It all boils down to who you appease. Obviously I can't say this for the USA, I don't live there, but it looks to me that those things are indeed similar: depending on who you ask, you may be left or right.
As for your original point, I don't know; I think agorism (the core philosophical and political ideas Ross and I defend) is generally considered left-wing, so Jeff posting about him makes me see him as somewhat inclined to the left (plus the usual criticism he makes of corporations, for example). But maybe you saw something more, something I did not see, so please explain to me those "right-wing vibes" you saw so I can understand it better.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
It's an interesting question, really.
In one situation, a person is placed in an incredibly green and beautiful place. They must cross this beautiful lawn to reach a specified destination, and then they will earn a reward. They have the option of getting there by an alternative path, with many more dangers and challenges. It is made clear that taking the standard path will cause less damage than taking the alternate path, and that the reward will be the same even if it takes a little longer.
In the other situation, a person is placed in a seemingly solid place that is actually full of hidden dangers. There is no warning or fanfare about the situation this person finds themselves in; there is only the fact that they need to arrive at their destination to receive their reward. They may struggle and learn to dodge danger, getting into a "safe context", but doing so will slow them down.
Which of the two people seems more likely to have fewer injuries at the end of the course?
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@michellebyrom6551
I don't like the way you are excluding elements from the problem; this is a vice that many people who defend hunter-gatherers as a better model of society fall into.
You cannot remove the "we have healthcare available to us" from the "we work 40 hours per week"; you cannot remove elements from a set without it losing its identity.
Hunter-gatherers worked less, this is true. They also had smaller populations and their groups were far from each other, as you mentioned; this is also true. But because they worked less, they had less technology and fewer spare resources available to them; this is an implication. They also had smaller populations and lived far from each other because their child mortality was basically ~50%! And child mortality was so high because they had no spare resources and no healthcare, because they did not work as much as we do. They could not spare resources to dedicate to their children, they could not take care of their sick, and they could not afford to feed the impaired the same way as the normal working members of their societies. But of course, they worked less, so all of this can apparently be safely ignored.
And about private property and profit: they are absolutely important for survival, and the fact that you are dissociating them from our modern life with all its conveniences (like healthcare) is the same vice that leads you into thinking they are "abstracts of philosophy". Even a hunter-gatherer had property in his clothes, his primary weapons (when he used them) and his own body. They did not need and could not afford private property in many things because they didn't even have those things to begin with, because they did not work as much as we do.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@AzerAnimations
To be fair, the "don't start new projects in C++" thing is more a general recommendation than anything specific to No Boilerplate; C++ is an unsafe language and has caused a lot of problems in software development. It was the only really "good" all-in-one solution for a long time, but now we have a mature Rust that can handle almost anything in a safer way and with almost the same performance (or even more, in some cases).
But you don't need to stop your passion project in C++. If C++ is the language you like to use, then use it. If you are writing some critical program that will put computers at risk, maybe you should consider rewriting it, but for your personal passion project, using C++ should be absolutely no problem at all. At the end of the day, all languages can do the same things, just in different ways and with different approaches, pros and cons, so don't be sad.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
[[Parents can't effectively protect their kids from all the "bad" stuff from the internet. That's just a fact of life.]]
They can protect them for the MOST part, and the rest is just unfortunate but also a fact of life; no government will solve this problem entirely either.
Parents' duty is to TEACH their children values. They should not need to control their kids' entire lives; they should teach them how to avoid these things themselves and why they're BAD. If you can't do it, it's a failure on your side, or at least it's something out of your control; you cannot solve all the world's problems.
[[The government can (and will) provide an ID with a secure chip than can be scanned to extract only the information needed. Even better, that info can be a zero-knowledge proof that only says "you are >18 years old".]]
The government can (and will) control every single aspect of your ENTIRE life if it wants to; it will adopt mass surveillance in the name of peace and will kill all opposition if it needs to, "in the name of democracy". The government WILL NOT do anything to gather LESS information; it will do EVERYTHING to make sure it is the one getting all the information while companies get less, so you can bet that they will make "zero-knowledge proofs, but not that much".
[[The platform needs to comply with the law. No way around that.]]
Sure as hell, just like companies needed to comply with the law under the regimes of the 1940s, but that does not make it a good thing.
[[The developers just have to use the zero-knowledge proof.
No 3rd party identity provider is needed.]]
"just have" is a very funny way to say that, if you knew the absurd difficulty which is providing a secure ZKP about personal identity without allowing pseudonyms, this will just not happen.
[[No one wants that, not even the governments.]]
You can bet your ass that they sure as hell want this, for them.
[[I can 100% assure you that when the EU will implement age verification next year, they will use a privacy-first system, probably a double-anonymity system with ZKP.]]
I can also assure you that when the next dictatorship forms in the EU, it will be even worse hell for the citizens of it than all the previous ones combined, because no one will know until it's too late.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@yyny0
> [ Crates like anyhow? Most libraries do not use that crate, and in fact, the recommendation from anyhow author is to NOT use them for library code. This means that the errors returned from those libraries do NOT have stacktraces, and NO way to recover them. ]
Oh, now I get it. You want <library stacktraces>, not <stacktraces in general>. Well, first: it is indeed possible to get a program's stacktrace from errors as values, so your statement was wrong. You can move the discussion to this being more of a problem with libraries, and I would agree, it is a pain in that specific case, but you <cannot say> it <can't be done>; it can. Now, you said this is a problem with <errors as values>, but this is more of a problem with <Rust>. A language crafted around errors as values, with stacktraces opt-in at the consumer level of libraries, would literally solve this problem, and then your point would be much less interesting, so it is kind of fragile to criticize errors as values in this regard based on the design decisions of Rust.
> [We've had several "fun" multi-hour debug sessions because of that.]
So you are using Rust? What? You just said your service has many exceptions and all that in the other comment, or are you talking about a different project or something? Also, can you explain to me exactly how the lack of stacktraces in specific libraries cost you "several 'fun' multi-hour debug sessions"? I have literally never experienced a "multi-hour debug session" because of something like that, and I work on a heck of a lot of projects, so a more concrete explanation would be good for the mutual-understanding part of this discussion.
> [Also, those crates are opt-in, and even some of our own code does not use anyhow, because it makes error handling an absolute pain compared to a plain `enum`s.]
It makes error handling a pain? What??? It was literally made <to make using errors less painful>; given the purpose of the library, why do you find it <an absolute pain> compared to plain enums? You also don't need to use anyhow: you can easily capture stacktraces in your application by using std::backtrace's capture or force_capture; the Rust docs even present it as a pretty easy way of gathering the causal chain that led to an error. You literally need to implement like 30 lines of code and use it across your whole application if you want to.
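To make that last point concrete, here is a minimal sketch of the ~30-line idea: an error type that records a backtrace via `std::backtrace::Backtrace::capture` (stable std API since Rust 1.65) at the point it is created. The names `AppError` and `read_config` are my own, hypothetical examples, not from any real codebase.

```rust
use std::backtrace::Backtrace;
use std::fmt;

// Hypothetical application-level error that records where it was created.
#[derive(Debug)]
struct AppError {
    message: String,
    backtrace: Backtrace,
}

impl AppError {
    fn new(message: impl Into<String>) -> Self {
        AppError {
            message: message.into(),
            // Captures a backtrace if RUST_BACKTRACE / RUST_LIB_BACKTRACE enable it;
            // otherwise it is a cheap, disabled placeholder.
            backtrace: Backtrace::capture(),
        }
    }
}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.message)
    }
}

impl std::error::Error for AppError {}

// Hypothetical fallible operation returning the error as a value.
fn read_config() -> Result<String, AppError> {
    Err(AppError::new("config file missing"))
}

fn main() {
    if let Err(e) = read_config() {
        eprintln!("error: {e}");
        eprintln!("backtrace:\n{}", e.backtrace);
    }
}
```

Using this type at your application's boundaries gives you the creation-site trace of each error value, without any external crate.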
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@yyny0
> [An error is incorrect by definition, it is NOT a valid representation of your system, in fact, the whole point of returning an error value is to describe invalid program state.]
What I said: ["a perfectly valid representation of a specific part of a system in a given time"]. An error IS, in fact, a perfectly <valid> (in a formal sense, and sometimes in business rules) <representation> of a <specific part> of a system at a given time. When you do a login and you type the password wrong, the InvalidPassword error <IS>, in fact, a perfectly reasonable and valid <representation> of state in your system (as it should be: it should not panic your software, just show a simple message to the user so they can type the correct password). When you call a method to write to a file, and the file does not exist, receiving a monad where the error is represented as a possibility <IS>, indeed, a perfectly valid way of representing your program's state. I just don't know why you are saying this. An error could be defined as an "undesired state given a specific goal intended in a closed system", but not necessarily as an "invalid state", if it is conceived as part of the representation of your possible states. Period.
Proceeding.
> [As for statistics: our production product has ~20k `throw`s across all our (vendored) dependencies (of which ~2k in our own code), and only 130 places where we catch them. Most of those places also immediately retry or restart the entire task.(...)Additionally, 99.9998% of those tasks in the last hour did not return an error, so even if the cost of throwing a single error was 1000x the cost of running an entire task (which it is not), it would still be irrelevant.]
You said "it should never happen", and then you showed me your personal data and said it happens 20k times, I would say this is a damn high number to say it "should never happen". Also, you did not give me any timeframe, not a comparison between different services, giving me one personal example is just anedoctal example. How exactly I'm supposed to work with this and say there are not very statistically representative systems that <do> have an impact with throwing exceptions for everything? This is a pretty specific response.
> [I would consider that "grinding to a halt".]
You consider restarting a program immediately after an error the same as "grinding to a halt"?
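To illustrate the earlier point about errors being valid state, here is a toy sketch (the `login` function and `LoginError` enum are hypothetical, invented for this example) where the error is an ordinary value the program matches on, with no panic and no unwinding:

```rust
// The error variants are part of the program's modeled state space.
#[derive(Debug, PartialEq)]
enum LoginError {
    InvalidPassword,
    UnknownUser,
}

fn login(user: &str, password: &str) -> Result<String, LoginError> {
    match (user, password) {
        ("alice", "secret") => Ok(format!("welcome, {user}")),
        ("alice", _) => Err(LoginError::InvalidPassword),
        _ => Err(LoginError::UnknownUser),
    }
}

fn main() {
    // The Err branch is handled like any other value: the user just
    // gets a message and can retry, exactly as the comment describes.
    match login("alice", "wrong") {
        Ok(msg) => println!("{msg}"),
        Err(LoginError::InvalidPassword) => println!("please retype your password"),
        Err(LoginError::UnknownUser) => println!("no such user"),
    }
}
```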
1
-
1
-
1
-
1
-
1
-
1
-
@yellingintothewind
> [Unless you are going to go full "no true scotsman", be careful making claims like "no libertarian", unless in regard to the NAP itself.]
This is not a kind of "no true scotsman"; I'm taking a principled stance on what it means to be a libertarian. Saying that "no libertarian would become a tyrant", for example, is simply asking "if a libertarian becomes a tyrant, is he still a libertarian?" (for obvious reasons, no, because being a libertarian IS agreeing to certain moral and ethical principles and acting according to them; you are not a libertarian merely by declaring yourself one).
That's why, for example, Walter Block himself was disqualified as a libertarian by Hoppe (part of his article: "If this collectivistic nonsense is not enough to disqualify Block as a libertarian, the following exhibit, demonstrating its monstrous consequences, should remove even the slightest remaining doubt that he is anything but a libertarian, a Rothbardian or a sweet and nice person.") because he was defending atrocities with regard to recent conflicts. Being a libertarian is not an identity; it is a burden that you need to carry appropriately.
In this sense I would rather qualify my affirmation further:
"If you think someone who is not related to your intellectual creations whatsoever has any duties to you arising from the fact that your mind "produced" them (duties in the state sense, or a responsibility not to disclose or share them, with legal consequences attached), merely by coming into contact with the thing and gathering those ideas in their own mind, then you are harming an innocent, and thereby you are acting against the very principles you claimed to defend." I think this reasonably justifies my initial broad affirmation.
> [Kinsella does hold copyright in his works, because there is, at present, no viable way for him to not hold that copyright]
I did not deny that. I said "saying he holds copyright is a bold statement" in the sense that to hold something you must also <enforce> that same thing, which I deny he would do (this resonates with his theory of property rights as well: for him, not enforcing your property and intentions over something implies not owning it -- naturally, this is a simplification). But as you yourself stated, if there is "no viable way for him to not hold that copyright", then I don't understand where the irony lies.
> [All that said, please go make sure you actually got the entire context of this thread, as YT has a tendency to screw with comments, and I believe you are confused about me. I bear no ill will toward Kinsella, and I agree essentially entirely with his assessment of the current system. It is only his abstract theory crafting where I do not agree, not that I strongly disagree either, I just don't find idle theory crafting a worthwhile use of time. Much as I think constantly answering "but who would build the roads" is generally a waste of time. The real answer in both cases is people would figure it out, in a context-specific way.]
Hmmm, I see. I think that comment of yours is also hidden, because I cannot see it when I click to read all of them (maybe it's just a loading thing), but I got the impression you had at least something against him. Either way, I'm not positioning myself against you in a personal sense; it's more that I found your comment very strange. I also don't consider myself a particular follower of Kinsella's ideas (even though I have read his writings and agree with many things he says), and I can respect your position that people would figure things out in the real situation (although I don't find this a response to the formulations libertarians make when responding to people; theory also impacts practice, and "abstract theory crafting" -- as you named it -- can also have effects on reality, and it is a good way of verifying the consistency of the theories you defend).
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@conorstewart2214
> [Why shouldn’t we take steps to help and protect people with disabilities to stop them from becoming totally outcast from society and left to fend for themselves through no fault of their own?]
This is more my problem here. There is this seemingly impossible-to-fix confusion in people's minds, caused by the modern notion of the state as somehow responsible for the common good and as the "representative of the people": that if you don't prohibit something by the absolute force of law, then you are not "taking steps to prevent a problem". In reality, making a law does not solve any problem; laws can only punish people when they exhibit specific behaviors, they cannot teach them that their behaviors are wrong, and they cannot lead to a more moral society. I am not saying we should discriminate against this man, nor that the people who discriminate should go unpunished, but that creating a law or using force to impose this view on people is not how we should solve the problem.
> [Do you really want to live in a society where it is every man for themselves and no one has any compassion or empathy for anyone else?]
See? You turned my "having a law for this is a problem" into "you want to live in a society without compassion or empathy". How am I supposed to answer you when your worldview is already established around the idea that if we don't try to force people to be moral by arresting them or suing them, we are being immoral ourselves? Forcing people to accept others is neither compassion nor empathy; there is not a single fraction of morality in doing what you do because there is a law preventing you from doing otherwise. This doesn't mean I'm saying "we should do nothing", but until this first part is sorted out, everything I say here will sound like an abomination to you.
1
-
1
-
1
-
1
-
1
-
Let me crush some of those points.
> then why are you doing a video about it?
Because it is interesting.
> One of the most effective way to improve longevity in many animals is calorie restriction and exercise. Poor people often have both, although not voluntarily.
Wild animals also <die younger> than domestic ones, so "caloric restriction and exercise" cannot beat <basic healthcare>, security and access to resources. If you are saying that poor people have the first because their lives are harsh, I don't see any problem in also stating the second: their longevity should be smaller.
> The population least likely to have their proper birth records are poor people. Ergo, they are like to just plug in 1st or 15th of the month they believe they were born in. Also, the older a person is, the older their birth record is, and the more likely they were born outside of a hospital. Older record keeping was much less rigorous than modern records. My Father-in-law had two different dates for birth records. One for the 1st, one for the 3rd of the month. Same month, same year. So his "pension fraud" would only gain him two days of payout.
This is very off the mark; I don't even think it qualifies as a counter-argument. One of the points she made is precisely that those registries are guessed, so nothing strange here. I also have some relatives who are 3-5 years younger than their records say.
> but to then submit a paper for publishing without validating his hypothesis is poor science. Sabine citing a non-reviewed paper is bad science communication. To me this is an example of confirmation bias. Sabine (who often seems suspicious of government spending), sees a paper suggesting that spending programs are filled with fraud, then broadcasts its message without much scrutiny.
She said it was not <YET> peer validated, which means she expects it to be. Not being peer reviewed also doesn't imply falsehood. I can understand that this is not the degree of quality you expect in science communication, but you can't judge her video as "confirmation bias" just because it doesn't conform to what you consider the ideal. The paper is also not premised on spending programs being filled with fraud (and most people know they are; you don't need a random non-peer-reviewed paper to back that up).
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@Tachi107
Don't worry, I meant it sounded aggressive in the sense that it sounded a lot more provocative than it should have, but I understand what you mean. Well, I personally think that Torvalds, being the creator of Linux and the one who maintains the kernel itself, has had enough time to consider all these issues, and frankly, I imagine this is going to be more of a test than an absolute commitment; then we will be able to see in practice whether these language problems really prove to be insoluble, or whether they are just things to improve over time. I believe they are insufficient reason to drop an entire language that has so much power (there is a lot more to Rust than the borrow checker; safe concurrency is one of its big selling points too, and that's quite interesting for a kernel). I don't think standardization is a big problem either; I mean, a good part of the reason C/C++ are such "static" languages is that they aren't willing to change quickly to solve their problems, while the Rust developers have been very responsive over the years and the language has been making great strides (from the article itself, it seems to me that the current barrier to solving that question is much more a proper formalization of the problem and its solution than an unwillingness to improve, which is extremely positive in this case). And as for low-level stuff needing to be implemented, this might be an unpopular opinion, but I also don't think that's a big issue; Rust is known for its stability, and it seems to me that if it's possible to solve problems this way, it's probably a good solution (mainly in view of the control and fine-tuning that this brings).
Don't get me wrong either: I've been a C# programmer much longer than I've been a Rust programmer, I also have my taste for C (an absolutely beautiful language, indeed) and my dislikes of C++. I just try to see how many of the criticisms made of the language are genuinely blocking criticisms and how many are, how to say it, just a very immediate reaction. In the end, I'm hoping the addition of the language will make the kernel more powerful and not worse at all (considering all that people have been saying about Rust, such as the recent statements by the Microsoft guy, and the fact that Torvalds himself has said that the language won't do any harm to the kernel, I believe it's a perfectly valid belief; these people wouldn't say that without reason).
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@swordcreeper7754
I wouldn't say that Rust takes away the programmer's freedom, any more than a magnetic screwdriver takes away its users' freedom, or a car with crash sensors; I think it's more a way of ensuring that errors don't occur than a removal of freedom per se (besides, you can evade most of the Rust rules, you just need to use unsafe code). I would say that, in a way, we need to restrict ourselves whenever we are going to write safe and readable code. This is true for Rust but also for C, C++ or any low-level language: if you turn on static analysis in languages like C or C++ (or if you focus on just using smart pointers, which are the recommended pattern for memory management), you would also be "restricting your freedom" in some sense. I would say it's just the default stance of the languages that makes the feeling differ, in the sense that one brings it by default and with the others you need to turn it on; both are a choice.
1
-
@diobrando2160
No, this does not "defeat the point of using Rust". Unsafe code exists in all languages; the difference lies in how much of it you need and in the ability to efficiently encapsulate unsafe code behind safe interfaces. This is a beginner's argument, and it's a pretty bad one.
And yes, C, C++ and Rust are low-level languages. They just aren't low-level compared to ASM and even lower-level languages; they clearly are low-level compared to more abstract languages like C#, JavaScript, Python, etc.
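To make the "encapsulating unsafe code behind a safe interface" point concrete, here is a minimal sketch (the function name `first_copied` is my own, purely illustrative): an `unsafe` raw-pointer read is wrapped in a safe function whose bounds check upholds the safety invariant, so callers can never trigger undefined behavior through it.

```rust
/// Returns a copy of the first element of a slice, or None if the
/// slice is empty. The unsafe pointer read is fully encapsulated:
/// the check above it guarantees the invariant it relies on.
fn first_copied(xs: &[i32]) -> Option<i32> {
    if xs.is_empty() {
        return None; // the unsafe block below only runs when len >= 1
    }
    // SAFETY: xs is non-empty, so xs.as_ptr() points to a valid,
    // initialized i32 within the slice's allocation.
    Some(unsafe { *xs.as_ptr() })
}

fn main() {
    assert_eq!(first_copied(&[10, 20, 30]), Some(10));
    assert_eq!(first_copied(&[]), None);
    println!("ok");
}
```

This is exactly the pattern the standard library itself uses internally: small, audited `unsafe` blocks with documented invariants, exposed through APIs that are safe to call.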
1
-
1
-
4:38
I've literally been using it for like 5 years now, and I must disagree.
Firstly, because there is literally no reason for it to be "laggy", and I never experienced lag with it (I think? Nothing I didn't also experience with Chrome-based browsers in general before).
It has a bunch of cool ideas: its own optional sidebar and customized reading lists (I don't care that much about those, but they're cool to have).
It feels very fast for browsing, and it removes most of the ads from websites for you, no third-party extensions needed. It also filters cookies and has a bunch of anonymity features that are very cool (even a Tor browsing mode, which is not ideal for total privacy but good if you want to test some things), and it passes most privacy tests for relatively good privacy browsers.
I can't say anything about the hotkey customization; I never felt the need for that in a browser.
The Web3 bullshit is just your own ideological opinion on the subject, so why is it even listed as a con? It doesn't get in your way; you can literally use the browser normally without ever touching the Web3 stuff. I almost never use it either and couldn't care less about it.
Other than that, it's a normal browser, so I don't see why someone who uses Chrome wouldn't like this browser as well.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1