Comments by "Lepi Doptera" (@lepidoptera9337) on the Sabine Hossenfelder video "Are Hallucinations Popping the AI Bubble?".
A CAD system is a productivity tool. Smart people use it if they can have it. I remember the day I introduced my significantly older boss at one of my workplaces to analog circuit simulation. He was skeptical at first. Two hours later he came running to me with a grin as wide as the Pacific Ocean. He thanked me for having talked him into it, and from that day on he did not design a single circuit without that newly discovered tool. He was a very smart boss, though. I am pretty sure that he became way better at utilizing the software than I ever was. ;-)
2
Yes, but apologizing is not enough unless learning can take place. None of the current AIs are capable of learning from their mistakes.
1
No, the majority of the data input is not wrong, but when it is wrong, it is usually spectacularly wrong, because then we are dealing with data curated by humans who themselves cannot admit to or correct mistakes. That is not the actual problem, though. The problem is that LLMs separate learning from transforming. It is not possible to teach an LLM to "forget" wrong responses and to correct its learned data set.
1
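The "learning is separated from transforming" point maps onto how these systems are actually built: parameters only change while a training algorithm is running, and at deployment time the forward pass runs with gradients disabled, so user feedback never touches the weights. Below is a minimal sketch of that split, using a toy PyTorch model as a stand-in (the tiny linear layer is an illustrative placeholder, not a real language model).

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM: a single linear layer (purely illustrative).
model = nn.Linear(4, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# --- Training phase: gradient steps actually change the parameters. ---
x, target = torch.randn(1, 4), torch.randn(1, 4)
before = model.weight.detach().clone()
loss = loss_fn(model(x), target)
loss.backward()
optimizer.step()
print(torch.equal(before, model.weight))    # False: learning happened here

# --- Deployment phase: the model only transforms inputs into outputs. ---
model.eval()
frozen = model.weight.detach().clone()
with torch.no_grad():                       # no gradients, no parameter updates
    _ = model(torch.randn(1, 4))            # "answering a prompt"
print(torch.equal(frozen, model.weight))    # True: nothing was learned
```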
That was a good hallucination.
1
Are you getting help with that? ;-)
1
Not the worst idea. For one thing, evolution has this motion thing figured out really well, but more importantly, humans will accept machines that move like humans far more easily than they will accept machines that move like machines.
1
Logic is actually based on physics. It's the behavior of finite collections of objects in baskets. So, yes, as long as we are talking about objects and immutable properties of objects, logic is a very fine tool to reflect reality. It does, however, fail quite spectacularly once we apply it to concepts that are not objects. The problem is that most humans don't know this, so they keep using logic in scenarios in which it is not the right tool. Having said that, LLMs are not using logical reasoning to begin with.
1
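One way to make the "objects in baskets" reading concrete: over a finite collection with crisp membership, the Boolean connectives correspond exactly to set operations, which is why logic tracks reality so well in that setting. The baskets below are arbitrary made-up examples; the point is only the correspondence, and that it says nothing about concepts without crisp membership.

```python
# Finite "baskets" of objects with crisp, immutable membership.
red_things = {"apple", "brick", "rose"}
fruit = {"apple", "pear", "cherry"}
universe = red_things | fruit | {"sky", "number seven"}

# Boolean connectives line up one-to-one with operations on the baskets.
red_and_fruit = red_things & fruit        # AND -> intersection
red_or_fruit = red_things | fruit         # OR  -> union
not_red = universe - red_things           # NOT -> complement within the universe

# Every object gets a definite truth value, so logic "reflects reality" here.
print("apple" in red_and_fruit)           # True
print("brick" in red_and_fruit)           # False
print("sky" in not_red)                   # True

# A concept like "justice" has no crisp membership in any basket,
# which is exactly where this tool stops being the right one.
```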
One of the key properties of intelligence is that it can learn on the spot. LLMs cannot. They have a fixed training data set, and once the training algorithm has finished, they operate on a constant knowledge set. You can easily test this for yourself: find a topic for which you know the training data of the LLM was faulty and then try talking it out of committing the same mistakes over and over again. You will soon give up. It's a very frustrating experience to be faced with a machine that has ZERO learning ability. This will change in time, but the solution won't be called an LLM any longer.
1
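The suggested self-test can be made concrete. Anything you tell a deployed chat model lives only in the conversation's context window; the weights stay fixed, so a fresh session starts from the same training data as before. The sketch below uses the OpenAI Python SDK only as one familiar example; the model name and the question are placeholders, and the point is the general frozen-weights behavior, not any specific product.

```python
from openai import OpenAI

client = OpenAI()          # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"      # placeholder model name

def ask(messages):
    """Send a chat history and return the model's reply text."""
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content

# Session 1: the user corrects an answer they believe is wrong.
history = [{"role": "user", "content": "Who first proved theorem X?"}]  # placeholder question
history.append({"role": "assistant", "content": ask(history)})
history.append({"role": "user", "content": "That is incorrect, it was actually Y. Please remember that."})
history.append({"role": "assistant", "content": ask(history)})  # may comply, but only within this context

# Session 2: a brand-new conversation. The correction is gone, because nothing
# above modified the model's parameters; it only sat in the prompt text.
fresh = [{"role": "user", "content": "Who first proved theorem X?"}]
print(ask(fresh))   # same frozen training data, so the same likely answer as before
```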
No, not really. I would still share common emotional states with the person. He could, at a fundamental level, still understand me, even if he couldn't follow my more complex logical thoughts. The LLM does not understand anything. More importantly, a person with an IQ of 70 can still learn. An LLM can't. Once its training is over, it can't be changed. Try talking any of the current AIs out of responses that are based on incorrect training data. It doesn't work. It cannot work, because LLMs do not learn on the spot like humans do.
1
Humans contain at least three such systems. The right hemisphere hallucinates all the time and is usually barely held in check by the left hemisphere. Underneath all of that is a crocodile waiting to be set loose. ;-)
1
That is technically not wrong, but most parents would certainly not want to have a person who talks like that around their kids. ;-)
1