Comments by "Vitaly L" (@vitalyl1327) on "3 Key Version Control Mistakes (HUGE STEP BACKWARDS)" video.
@vincentvogelaar6015 apparently you do not understand how to use LLMs. They're no different from our own minds - we cannot reason either unless we use tools, such as formal logic. So give LLMs their tools: give them the ability to write down their reasoning step by step and to verify that reasoning with formal methods (as in, make them write the steps down as HOL proofs or Prolog predicates). Give them a sandbox to debug the proofs, just as you would with any other code. Provide a critical loop to make sure they did not miss anything from the problem formulation when translating it into a proof.
I'm using LLMs to solve engineering problems, and reasoning is a crucial part of it. Even very small models (like Phi-3) are perfectly capable of reasoning at a level beyond the capacity of an average engineer, when given the right tools and a proper sandbox to test ideas in (akin to our imagination).
Also, LLMs perform best when reasoning about things that were not in the training set. E.g., they write much better code in languages they've never seen, because they're forced to do it slowly, verifying every step, instead of churning out answers instinctively.
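The "critical loop" the commenter describes can be sketched as a simple generate-verify-refine cycle. This is a minimal illustration only: `draft_encoding` is a hypothetical stand-in for an LLM call, and `verify` stands in for running the draft through a sandboxed prover (Prolog, HOL, etc.); neither is a real API.

```python
# Hedged sketch of a critical loop: a model drafts a formal encoding of
# a problem, a verifier reports errors, and the error text is fed back
# into the next draft until the encoding checks out or attempts run out.

def draft_encoding(problem, feedback):
    # Placeholder for an LLM call; a real system would prompt the model
    # with the problem statement plus the verifier's last error message.
    if feedback is None:
        return "p(X) :- q(X)"        # first, flawed attempt
    return "p(X) :- q(X), r(X)"      # revised attempt after feedback

def verify(encoding):
    # Placeholder for executing the encoding in a sandboxed prover.
    if "r(X)" not in encoding:
        return False, "missing premise r(X)"
    return True, ""

def critical_loop(problem, max_attempts=3):
    feedback = None
    for _ in range(max_attempts):
        candidate = draft_encoding(problem, feedback)
        ok, error = verify(candidate)
        if ok:
            return candidate
        feedback = error             # feed the error back to the model
    return None                      # give up after max_attempts

print(critical_loop("every p is both a q and an r"))
```

The point of the structure is that correctness comes from the verifier, not from the model: the model only has to converge on something the sandbox accepts.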