Comments by "miraculixx" (@miraculixxs) on "Internet of Bugs"
channel.
@michaelbarker6460 interesting approach using multiple models, thanks. I doubt, though, that it addresses the problem: fundamentally, having multiple models agree just means it's a majority decision. If 2 of 3 models think the answer is A, 1 model thinks the answer is B, and B is correct, the end result is A, and it is still wrong.
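
To make that failure mode concrete, here is a minimal sketch in Python (the answers and the vote function are hypothetical, purely for illustration): if a majority of models share the same wrong answer, voting confidently returns the wrong answer.

    from collections import Counter

    def majority_vote(answers):
        # Return the most common answer among the model outputs.
        return Counter(answers).most_common(1)[0][0]

    # Hypothetical case: two models agree on the wrong answer "A",
    # one model gives the correct answer "B".
    answers = ["A", "A", "B"]
    print(majority_vote(answers))  # -> "A", still wrong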
Re. RAG, that also doesn't really fix hallucinations; it just adds more complexity. That is, hallucinations can now come from a subset of one of your documents, or even a mix of many. No guarantees of correctness.
I think if the use case is information retrieval, i.e. finding the most relevant documents and the parts therein, then just do that. No LLM needed (a vector DB / search engine will do), perhaps with templated responses; see the sketch below.
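
A minimal sketch of that idea, assuming a toy bag-of-words similarity in place of a real embedding model and vector database (the documents and query here are made up): retrieve the closest document and answer with a template that quotes the source verbatim, so there is nothing for a model to hallucinate.

    from collections import Counter
    import math

    def embed(text):
        # Toy bag-of-words "embedding"; a real system would use a
        # sentence-embedding model and a vector database instead.
        return Counter(text.lower().split())

    def cosine(a, b):
        # Cosine similarity between two sparse word-count vectors.
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    documents = [
        "Reset your password from the account settings page.",
        "Invoices are emailed on the first day of each month.",
    ]

    def retrieve(query):
        scored = [(doc, cosine(embed(query), embed(doc))) for doc in documents]
        best_doc, score = max(scored, key=lambda pair: pair[1])
        # Templated response: quote the retrieved source verbatim
        # rather than generating free text.
        return f'Closest match (score {score:.2f}): "{best_doc}"'

    print(retrieve("how do I reset my password"))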