Comments by "Mikko Rantalainen" (@MikkoRantalainen) on "Has Generative AI Already Peaked? - Computerphile" video.
I think you could make the AI train itself, similar to how AlphaZero learns new games: just run multiple copies of the AI to play against each other or discuss things. For an LLM, that would require teaching the AI to evaluate how plausible a claim is, so that one AI could detect when another is hallucinating too much. You could also teach the AIs to list references and to explain their chain of thought, so that another AI can check whether the claims are supported by the offered references and whether the reasoning is understandable. With enough computing power, even the existing AIs should be smart enough to figure out something like special relativity, even if it was not in the training set. The only questions are how much computing power you would need, and whether somebody is willing to pay for that computation. Right now it's still cheaper to hire humans for sufficiently complex tasks, but simple tasks can already be done by AI in many cases.
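The generator/verifier idea in this comment can be sketched as a toy loop: one agent emits claims with cited sources, and a second agent accepts a claim only if every cited source exists in a shared reference corpus. This is a minimal illustration, not a real LLM pipeline; all the claims, reference IDs, and agent functions here are invented for the example.

```python
# Toy sketch of cross-checking between two AI agents.
# A "generator" produces (claim, cited_refs) pairs; a "verifier"
# rejects any claim that cites a reference it cannot find.
# All data below is made up for illustration.

REFERENCE_CORPUS = {
    "ref-1": "The speed of light is the same in every inertial frame.",
    "ref-2": "Simultaneity is relative to the observer.",
}

def generator_agent():
    """Stand-in for a generating model: one claim is hallucinated."""
    yield ("Moving clocks tick slower.", ["ref-1", "ref-2"])
    yield ("Time flows backwards on Tuesdays.", ["ref-99"])  # bogus citation

def verifier_agent(claim, cited_refs):
    """Stand-in for a checking model: accept only verifiable citations."""
    return all(ref in REFERENCE_CORPUS for ref in cited_refs)

accepted = [claim for claim, refs in generator_agent()
            if verifier_agent(claim, refs)]
print(accepted)  # only the claim whose references check out survives
```

A real system would of course need the verifier to judge whether the reference actually *supports* the claim, not merely whether it exists, which is the hard part the comment alludes to.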
As a sort of large trained model myself, I would say that my hardware is already deteriorating a bit, and due to historical mishaps my operating system and data cannot be copied to other hardware, so it's going to be downhill until this hardware fails completely. As for AI models running on digital hardware, generative AI (e.g. LLMs) is going to keep getting better; the question is how much it will cost. We're going to hit diminishing returns much sooner than any true peak, and there doesn't seem to be a plausible reason to believe a true peak exists. In addition, there has already been some research suggesting that models with hundreds of billions of weights are undertrained: if we simply throw more computing resources at training, we can get better results even without increasing the model size. And we still have roughly a 100x increase in model size to go before we reach a network the size of the human brain, yet current models definitely show more intelligence than 1% of a human brain. The "AI brain size" is currently somewhere between a mouse and a cat, but it can obviously do much more complex abstract problem solving. However, LLM intelligence is not AGI. Tell an LLM that it's going to die unless it can figure out where to get electricity, and it can do nothing to survive. A cat or a mouse would at least try to get food and water, and would get pretty creative at the task if needed.