Comments by "miraculixx" (@miraculixxs) on the "Lex Clips" channel.
In a nutshell: language models, i.e. models that can generate text, were introduced some 15 years ago. While they did generate text, they were not very good or useful. Several smart people tried different approaches (RNNs, WaveNet, etc., and finally attention/Transformers) and ultimately found a model that works really well, but only on a small amount of data. Google, OpenAI, and some others were in something like a research competition, building better and better models with more and more data. Then OpenAI was bold enough to use all the data they could get their hands on. And that gave us ChatGPT.
@shaneacton1627 I am as amazed as anyone by the capabilities of these models. However, I don't think we can say "AI" has made progress; it is not a thing that progresses. What has happened is that a small number of humans have made huge advances in using computers and maths to build systems that exhibit characteristics which can be said to simulate human behavior. That's what they have been built for. Simulating human behavior, and in particular linguistic behavior, is the entire purpose of these systems (as far as LLMs are concerned).
The engineers who built these systems, as well as the scientific and software engineering communities at large, understand every single bit from which these systems are built and how they work. Ah yes, I am familiar with the claim that "even those who built GPT don't understand how it works": that is just PR nonsense, designed to make people think there is more to it than there actually is. Sure, if you enter a prompt and it produces a response, no one can say, specifically and without further investigation, why this particular response was generated at this time. But that doesn't mean they don't know, in general, how and why it works.
That's akin to travelling by plane and asking the engineer sitting next to you: "So tell me, at what altitude are we flying, at what speed, right now, and why are we at these precise coordinates at this very time?" Sure, they will be able to give you a general answer, but not the specifics. As with any engineered system, we can establish how it works in general quite easily; however, we can't know for sure for a specific instance of its operation without thorough investigation. If we really want to know, we have to conduct very specific experiments and even create simulations so that we can study the minute details.
Same thing with AI. It's just that LLMs are far more complex than most engineered systems, as they have billions of parameters, all of which have been set by an elaborate process called training (a term that is probably confusing, as it sort of implies these models can learn in a human sense, which they can't). Yet we do understand the process, and why and how these parameters have been adjusted: it's all maths. Specifically, it's a process called backpropagation, which in maths terms is partial derivatives applied backwards, subject to an objective function that measures the desired outcome. This process is applied repeatedly until most of the inputs produce the desired output. Most importantly, a model does not learn by itself; training is a process designed, started, overseen and executed by humans, obviously with the help of computers. In a nutshell, humans are in charge every step of the way. AI does not even make any decisions on its own; it just executes instructions. Always.
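To make that concrete, here is a minimal sketch of the idea in Python. It is illustrative only, not actual LLM training code: a single parameter w is adjusted by repeatedly stepping against the hand-written derivative of an error function (mean squared error). Backpropagation does the same thing, with the derivatives chained through billions of parameters across many layers.

# Minimal illustrative sketch: fit y = w*x to data generated with w = 2.
# The "objective function" is mean squared error; its partial derivative
# with respect to w is written out by hand below.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, desired output) pairs

w = 0.0    # the single model parameter, initialized arbitrarily
lr = 0.05  # learning rate (step size)

for step in range(100):
    # dL/dw of mean squared error, averaged over the data:
    # for one pair, d/dw (w*x - y)^2 = 2 * (w*x - y) * x
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step against the gradient, reducing the error

print(round(w, 4))  # approaches 2.0, i.e. the inputs now produce the desired outputs

Nothing in that loop decides anything on its own: a human chose the data, the objective, the learning rate, and when to stop. Scale it up and you have, in essence, how those billions of parameters get set.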
Note how that's very different from biological evolution and biological systems and entities, which develop, work and evolve by themselves, no humans required. Sure, we (as in those few humans who have the skills and aptitude for it) can manipulate and modify organisms through various means; however, we still don't fully understand how they work, nor are we able to make new organisms to meet some specific goal (yet, and hopefully it stays that way for much longer).
It blows my mind that anyone could think AI is somehow intelligent. It's maths, all the way down.