Comments by @grokitall on the "Lex Fridman" channel.

  7. Yes, your typical symbolic AI will be a fragile system, because its rules only encode shallow knowledge. As pointed out in this interview, statistical systems like ChatGPT have an even bigger problem here: they only know what is statistically plausible, with nothing constraining that, and as also noted in the interview, when you look at the statistical data in the hospital example, most of it was coincidence, and thus noise, which is why such systems hallucinate so often. The shallow knowledge problem is the reason the Cyc project was set up: Lenat kept finding that after an initial success you needed to drill down to the deeper underlying reasons for something in order to make further progress, and that extra, deeper knowledge was not just lying around ready to be dumped into the system, so he decided to start collecting it. Current AI, especially black box statistical AI, excels in areas where good enough most of the time is beneficial and total garbage the rest of the time does not really matter. For literally every other type of AI problem you need layer upon layer of feedback telling the lower levels that the answer they contributed was wrong, and preferably what the right answer was, so they can get it right next time. That requires white box symbolic AI, as do various legal issues such as Copilot being an automated copyright infringement machine, or the question of who is legally liable when the AI kills someone.
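The point about the hospital example (coincidental correlations read as signal) can be illustrated with a minimal synthetic sketch. The data below is random and purely hypothetical, not the dataset discussed in the interview: with enough unconstrained features, some will correlate with the outcome by pure chance, which is exactly the kind of noise a purely statistical learner will treat as knowledge.

```python
# Toy illustration: random features, independent of the outcome by construction,
# still produce a few "strong" correlations purely by coincidence.
import numpy as np

rng = np.random.default_rng(0)

n_patients = 200
n_features = 500  # hypothetical chart fields, all generated independently of the outcome

outcome = rng.integers(0, 2, size=n_patients)                  # 0/1 "recovered"
features = rng.integers(0, 2, size=(n_patients, n_features))   # unrelated binary attributes

# Correlation of each feature with the outcome.
corrs = np.array([np.corrcoef(features[:, j], outcome)[0, 1] for j in range(n_features)])

# A purely statistical learner with no background model would treat the strongest
# of these as real signal, even though every one of them is coincidence.
strongest = np.argsort(np.abs(corrs))[-5:][::-1]
for j in strongest:
    print(f"feature {j}: correlation {corrs[j]:+.2f} (pure chance)")
```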
  10. @mandy2tomtube True, life started out with no language, no models of the environment, and really poor decision making, but that is all irrelevant. Black box AI has a number of fatal flaws in its basic design which fundamentally cap the level it can reach and the roles where it can be applied. This is because it has no model of the problem space it is working on, and thus gets minimal feedback, and because for man-rated systems you need to be able to ask not just whether it got something wrong, but how it got it wrong, so you can work out how to fix it and apply the patch. At the moment we cannot know how; we can only wrap the system in a conventional program, spot examples it has got wrong in the past, and return the right answer. Unfortunately that does not stop it getting nearly identical cases wrong. You also have no method with which to fix it, which matters all the more given that recent research has found the majority of such models to be full of security holes. The only way to resolve that is to stop using statistical AI as anything but a learning accelerator and move to white box symbolic AI instead, which is what Cyc does. We do not limit the options for flight to human-powered flight, nor transport in general to how fast your horse can run, so how we got here does not matter much; what matters is how we get from just before here to just after here, and statistical AI is just not up to the job. For anything else, you need models, which are either mathematical or expressed in language.
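The "wrap the system in a conventional program" workaround described above can be sketched roughly as follows. The names opaque_model, KNOWN_FIXES, and patched_model are hypothetical, for illustration only, not any real system: an exact-match patch table catches a previously spotted failure, but a nearly identical query falls straight through to the unfixed black box.

```python
# Minimal sketch of patching a black box from the outside, under the assumption
# that we can only recognise inputs we have already seen fail.

def opaque_model(question: str) -> str:
    # Stand-in for a statistical model we cannot inspect or retrain here.
    return "plausible but possibly wrong answer"

# Corrections recorded after specific failures were spotted by humans.
KNOWN_FIXES = {
    "how many moons does mars have?": "Mars has two moons, Phobos and Deimos.",
}

def patched_model(question: str) -> str:
    key = question.strip().lower()
    if key in KNOWN_FIXES:
        return KNOWN_FIXES[key]       # exact match: return the corrected answer
    return opaque_model(question)     # anything else falls through unpatched

print(patched_model("How many moons does Mars have?"))   # caught by the patch table
print(patched_model("How many moons does Mars have??"))  # nearly identical, not caught
```

The wrapper never changes the model itself, which is the commenter's point: without a model of how the answer was produced, each fix only covers the exact cases already seen.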