Comments by "" (@diadetediotedio6918) on "Theo - t3․gg"
channel.
-
4:36
No, really, this is not that true. Even in the assembly days people were already trying to get rid of it because it was just not practical. It obviously sounded like something hard to grasp ("how will the 'compiler' produce better-optimized assembly than us?"), but the impracticality of assembly was very clear from the beginning (people already used a LOT of macros at the time, which makes writing assembly far less manual labor than writing it by hand, so the jump was not that far).
It is completely different when you are selling a BS statistical machine that may or may not produce working code and that, in the worst case, deletes your whole database because people became so reliant on it that they don't check things anymore (with 100% of the code machine-written: we know people take more time to read code than to write it, it is way harder to understand, the probability of hidden bugs is way higher, etc.). The problems here are of a WAY different kind.
-
@bangunny
Nah, I think this is a misconception. You can use Tailwind before knowing CSS and then use your Tailwind knowledge to write better CSS, or it can work the other way around: with your CSS knowledge you know how to use Tailwind better.
In principle this is possible because Tailwind is very descriptive, it tells you what it is doing: "bg-xxx" is clearly about a background color, "w-xxx" is about width, "pb-xxx" is about padding-bottom, and so on. You learn from the naming and from the experience of using the tokens. When you later use CSS, you start discovering the same keywords in an "extended form", so the knowledge is very transferable.
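Since the whole point is that the token names decompose predictably, here is a minimal sketch of that decomposition (my own illustration in TypeScript, not Tailwind's actual implementation; the resolve function and lookup tables are hypothetical, with a few values hand-picked to mirror Tailwind's defaults):

    // Sketch: how Tailwind-style utility tokens read as CSS declarations.
    // prefixToProperty and scale are illustrative, not Tailwind's real API.
    const prefixToProperty: Record<string, string> = {
      bg: "background-color", // bg-red-500 -> background-color
      w: "width",             // w-64       -> width
      pb: "padding-bottom",   // pb-4       -> padding-bottom
    };

    // Hypothetical scale, hand-picked to mirror a few Tailwind default values.
    const scale: Record<string, string> = {
      "red-500": "#ef4444",
      "64": "16rem",
      "4": "1rem",
    };

    function resolve(token: string): string {
      const [prefix, ...rest] = token.split("-");
      const property = prefixToProperty[prefix];
      const value = scale[rest.join("-")];
      if (!property || !value) throw new Error(`unknown token: ${token}`);
      return `${property}: ${value};`;
    }

    console.log(resolve("bg-red-500")); // background-color: #ef4444;
    console.log(resolve("w-64"));       // width: 16rem;
    console.log(resolve("pb-4"));       // padding-bottom: 1rem;

Reading tokens this way is exactly the transfer I mean: once you know "pb" expands to "padding-bottom", the CSS property is no surprise.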
-
I think this is a misguided opinion. Software developers are engineers, but also architects and also bricklayers; we do all of this at once because software is that complex. It is obviously different from physical engineering, but also not that different from all the other kinds of engineering. It is plainly possible, for example, to build a simple robot as a proof of concept without carefully planning it beforehand; you just need the right intuitions. Curiously, it is also possible to build a house without planning. I live in a country where this is fairly common, and those houses tend to last for decades or more; they are obviously not marvelous constructions, but they don't collapse (bridges are in another league, but even they are sometimes made this way).
As a software engineer you don't need to care about physics (unless we are talking about performance), so mistakes are usually not extremely dangerous; they are also expected, since software has dozens or hundreds of moving parts, whereas a bridge only needs to be stable, reliable, and durable. These are different professions; you should not collapse their definitions like that just to compare them and say "this is not engineering and this is".
-
@
What exactly is this question supposed to mean? I live in Brazil; here most cheap computers have this amount of disk space. Maybe "most" is a bit hyperbolic, but the defaults are around that, sometimes up to 500GB, and those are the newly bought ones. People in small towns have old machines, sometimes 15+ years old, which means second-generation processors and the like, and small amounts of RAM (until I got my current PC ~7 years ago I had at most 2GB or 4GB of RAM, and after that I used one with 6GB for some years; the disk space was not much more than 128GB and was almost always full). It's not that uncommon here.
-
@cemreomerayna463
1. I became angry because of your attitude towards my response, not because you offended me directly, nor because of a "stupid technology". I'm just generally tired of AI apologists and of people like the other guy in this thread who talk BS without any kind of proper reflection. Either way, sorry if my tone was not ideal for a discussion; I'm fine and this is all in the past now.
2. And what I'm saying, what I literally said in my comment, is that this is a fundamentally, computationally intractable problem. Do you understand the implications of this? The implications are that [it is not getting more reliable], or rather, that the [reliability gains are marginal].
For one, reliability implies a grounded, conscious commitment to the truthfulness of a sentence. You say someone is reliable when that person (1) has a good amount of proven knowledge, (2) has the right intentions (to seek truth), and has peers confirming that veracity; those conditions are generally reasonable to expect when we define reliability. Now, AI fails two of these. It does not have true knowledge in any sense, it literally just spits tokens, that is [literally] how it works; you can make an AI say almost anything with the right prompt, which is far from possible with humans and is obviously a terrible sign for this prospect. It does not "understand", it emits the most probable token. It can be steered towards more "reliable" responses by reinforcement learning and other techniques (like dataset filtering, or "grounding" with things like RAG and similar), but it is still fundamentally just spitting tokens in a specific order; there is no knowledge there, so it fails condition (1). As for (2), AI obviously does not have consciousness, nor any kind of known morality; it can only imitate and emit, so it is easy to see why, by implication, it also cannot "commit to the truth" nor "tell the truth" by any means imaginable. These systems are just intrinsically unreliable, and that's the point.
For coding the implications are exactly the same. A programming language is not a "regular grammar"; I don't know where you got that impression. Most if not all mainstream programming languages are context-free grammars with specific context-sensitive aspects, structured that way so they can be parsed efficiently. They are obviously far less complex than natural language, but nowhere near as simple as a regular grammar (see the sketch after these points). It is also the case that coding is extremely complex in and of itself, and even the best, most advanced "reasoning" models make extremely silly mistakes you would expect from a complete amateur (like inventing fictitious packages out of thin air, writing dysfunctional code that does dangerous things such as deleting what it is not supposed to delete, and having a basic-to-terrible grasp of coding patterns and expected solutions). I've used every model from GPT-2 (unable to produce almost anything but extremely short one-liners that were terribly wrong almost all the time) to GPT-3 (terrible at coding, but starting to improve), 3.5 (way better, still terrible), 4 (mid at best), 4o (almost the same as 4, a bit more precise), o1 ("reasons", but still commits the same basic mistakes I saw in 4o) and o3-mini-x (not that much better than o1). Those models are not more "reliable"; they are better at making less obvious mistakes, which is arguably more dangerous, not less, since now you need to understand the semantics of the thing to catch them. They make fewer crude mistakes while still producing copious amounts of silly, problematic errors. Their "reliability" is getting marginally better with each new innovation, which is exactly what I'm saying.
3. This is not only false but also a dangerous way of thinking in and of itself. See (2) for why the reliability of humans is inherently less problematic, and more than that: humans take responsibility for their actions, they are moral agents in the world, while AI agents are, again, just spitting words. If a human makes a terrible, fatal mistake, he can be fired or even sent to jail; he will have nightmares about it. A bot making mistakes is effectively a sociopath: it cannot be held accountable, cannot feel anything, and its unpredictability is [absolutely dangerous], while humans have developed ways to deal with their uncertainty that actually work (we literally put men on the Moon with software so small it would be dwarfed by a compiled hello-world in many modern languages). Your response is insufficient, and problematic.
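To make the grammar point in (2) concrete, here is a minimal sketch (my own example in TypeScript, following the standard formal-language argument): matched, arbitrarily nested delimiters, which every mainstream language has, already exceed a regular grammar, because a finite automaton has only finitely many states and cannot count unbounded nesting depth, while a single stack handles it:

    // Sketch: nested delimiters need context-free power.
    // A regex / finite automaton cannot track unbounded nesting; a stack can.
    function isBalanced(src: string): boolean {
      const stack: string[] = [];
      const close: Record<string, string> = { ")": "(", "]": "[", "}": "{" };
      for (const ch of src) {
        if (ch === "(" || ch === "[" || ch === "{") {
          stack.push(ch); // remember the opener
        } else if (ch in close) {
          if (stack.pop() !== close[ch]) return false; // wrong or extra closer
        }
      }
      return stack.length === 0; // every opener must have been closed
    }

    console.log(isBalanced("fn(a[i], { b: (c) })")); // true
    console.log(isBalanced("fn(a[i)]"));             // false

And this is only the context-free core; real languages add context-sensitive checks (declarations before use, types, etc.) on top of it.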