Comments by "Martinit0" (@Martinit0) on the "Dwarkesh Patel" channel.
Because LLMs - once trained - don't extract rules from input data and then apply those rules in a separate step. That would be precisely the "synthesizing" step that Chollet talked about. LLMs just ingest the input and vomit out the most likely output. The human equivalent is a gut-feel reaction (what we call intuition) without any attempt at reasoning.
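(Illustrative aside: a minimal toy sketch, in Python, of what "vomit out the most likely output" means mechanically - greedy next-token decoding with no separate rule-extraction or rule-application step. The toy model and its probabilities below are made up purely for illustration and stand in for a real LLM.)

# Hypothetical toy "language model": maps the last token of the context to a
# made-up distribution over next tokens. A real LLM conditions on the whole
# context, but the decoding loop has the same shape.
TOY_MODEL = {
    "the": {"cat": 0.6, "dog": 0.3, "rules": 0.1},
    "cat": {"sat": 0.7, "ran": 0.2, "slept": 0.1},
    "sat": {".": 0.9, "down": 0.1},
}

def next_token(context):
    """Return the single most likely next token - an argmax, no reasoning step."""
    dist = TOY_MODEL.get(context[-1], {".": 1.0})
    return max(dist, key=dist.get)

def generate(prompt, max_len=5):
    out = list(prompt)
    for _ in range(max_len):
        tok = next_token(out)
        out.append(tok)
        if tok == ".":
            break
    return out

print(" ".join(generate(["the"])))  # -> "the cat sat ."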
Not for inference. Inference can be done on a single (sufficiently large) GPU. It's only the training of LLMs that requires massive server farms.
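(Illustrative aside: a minimal single-GPU inference sketch, assuming PyTorch, the Hugging Face transformers library with accelerate installed, and a model whose half-precision weights fit in that GPU's VRAM; the model name is only a placeholder.)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder; pick any model that fits in VRAM

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision roughly halves the memory footprint
    device_map="cuda:0",        # place the entire model on one GPU
)

prompt = "Inference needs far less hardware than training because"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))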
@falklumo In fact, I would say that most high-school math is about memorizing outright results (what is 2 x 3?) or at least solution recipes (e.g., solving a linear equation). Only at university level - when you are asked to come up with proofs you haven't seen before - are you actually challenged and need to apply intelligence. Example: when I ask you what 2+2 is, you don't actually start adding 2 and 2 and computing the result. You have done it so many times that you memorized the answer, 4, and reply in under a second. But ask a first-grader and they have to actually sit down and literally put 2 and 2 together - it takes significantly more mental effort and time. The rest of math in school is just memorizing increasingly complex concepts or results. Also, grading in school with time-limited tests is all about testing memorization - there is often not enough time to apply intelligence and derive solutions.
@empirednw6624 Ok, but now Trump, with his infinite tariff wall, is removing a big reason for China to hold back on attacking Taiwan. If China can't sell to the USA anyway, it might as well grab that precious stone the USA can't operate without - and then see how China's negotiating position improves. There is even a certain urgency for China to act right now - the USA has been caught with its pants down on manufacturing. And we aren't even sure the USA would step in and defend Taiwan - once the factories are damaged and the knowledge has dispersed, what is there left to defend? That makes the tariff war a very dangerous tactic.
Yes, that may be, but who came up with the methods that you teach students? At some point, someone had to be the first person to come up with the method. That is what we are looking for with "AGI".
It's likely due to propaganda driven by Russia. They have the disinformation game down very well. We see it in the funding flows to right-wing parties in Europe - they come from Russia.
Please give an example of "LLMs that take user input that is inherently stochastic and it can come up with novel outputs".
42:45 Ham meme: https://www.youtube.com/watch?v=cYxIe7T8ZKA