Comments by "miraculixx" (@miraculixxs) on the "Lex Clips" channel.
In a nutshell: Language models, i.e. models that can generate text, were introduced some ~15 years ago. While they did generate text, they were not very good or useful. Several smart people tried different approaches (RNNs, WaveNet, etc., and finally Attention/Transformers) and ultimately found a model that works really well, but only on a small amount of data. Google, OpenAI, and some others were in something like a research competition to build better and better models using more and more data. Then OpenAI was bold enough to use all the data they could get their hands on. And that gave us ChatGPT.