Comments by "Vitaly L" (@vitalyl1327) on "I Want LLM Agents To Be Good" video.
@mortiz20101 if you failed to comprehend what I'm talking about, you probably should not use LLMs.
1) Feedback loop: feed the result of testing the LLM output back into the LLM, along with all the relevant diagnostics (syntax errors, test failures, static code analysis output, etc.)
2) Critic: each time you need an output from an LLM, generate it several times with the same prompt, then use another LLM prompt to criticise all the candidates and select the best one.
3) Code sandbox: give the LLM a tool to run arbitrary code in a safe sandbox. Use inference harnessing to ensure the tool is invoked immediately when the call appears in the output.
4) SMT, Prolog, etc. - LLMs cannot reason, obviously. But they can translate an informal problem into a formal language, which can then be processed by an SMT solver, a Prolog interpreter, or whatever else you use as a reasoning tool.
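Points (1)-(3) compose naturally into one loop. A minimal sketch, assuming a placeholder `llm()` completion function (hypothetical, stands in for whatever API you use) and a subprocess as a crude stand-in for a real sandbox:

```python
import subprocess
import sys

def llm(prompt: str) -> str:
    """Placeholder for a real completion API call (hypothetical)."""
    raise NotImplementedError

def run_in_sandbox(code: str) -> tuple[int, str]:
    """Run Python code in a subprocess; a real setup would isolate it properly."""
    proc = subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True, timeout=10)
    return proc.returncode, proc.stdout + proc.stderr

def generate_with_feedback(task: str, max_rounds: int = 3, n_samples: int = 3) -> str:
    prompt = task
    best = ""
    for _ in range(max_rounds):
        # (2) Critic: sample several candidates, ask the model to pick the best.
        candidates = [llm(prompt) for _ in range(n_samples)]
        numbered = "\n\n".join(f"[{i}]\n{c}" for i, c in enumerate(candidates))
        choice = llm(f"Reply with the index of the best solution only:\n{numbered}")
        best = candidates[int(choice.strip())]  # assumes the critic obeys the format
        # (1) + (3) Feedback loop: execute in the sandbox, feed diagnostics back.
        status, output = run_in_sandbox(best)
        if status == 0:
            return best
        prompt = f"{task}\n\nPrevious attempt:\n{best}\n\nDiagnostics:\n{output}\nFix the errors."
    return best
```

In practice the diagnostics string would also carry test failures and static-analysis output, not just the runtime trace.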
You have a lot to learn. Do it. Or stay ignorant.
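To illustrate point (4): the LLM's only job is the informal-to-formal translation; the solver does the reasoning. A sketch where the SMT-LIB text would normally come from the model (hard-coded here for clarity) and is handed to the `z3` CLI if it is installed:

```python
import shutil
import subprocess

# What an LLM might emit when asked to formalise:
# "Ann is twice as old as Bob; their ages sum to 36. How old is each?"
SMT_PROBLEM = """
(declare-const ann Int)
(declare-const bob Int)
(assert (= ann (* 2 bob)))
(assert (= (+ ann bob) 36))
(check-sat)
(get-model)
"""

def solve_with_z3(smt_text: str) -> str:
    """Pass the formal problem to an actual reasoning tool (the z3 solver)."""
    if shutil.which("z3") is None:
        return "z3 not installed; translation shown above"
    proc = subprocess.run(["z3", "-in"], input=smt_text,
                          capture_output=True, text=True, timeout=10)
    return proc.stdout
```

The same pattern works with a Prolog interpreter or any other external reasoner: the model writes the formal statement, the tool derives the answer.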