Comments by @grokitall on the video "Is the Intelligence-Explosion Near? A Reality Check."

  2. The data and power scaling issues are a real feature of the large statistical language models, which currently hallucinate very well to give us better bad guesses at things. Unfortunately for the author of the paper, Sabine is right: the current best models have only got better by scaling by orders of magnitude, which is fundamentally limited, and his idea of a perpetual-motion system of robots built from resources mined by robots, using the improved AI from those end-product robots, cannot fix it.

     To get around this you need symbolic AI, such as expert systems, where the rules are known and tie back to the specific training data that generated them (a minimal sketch of this appears at the end of this section). Then you need every new level of output to work by generating new data, with emphasis on recognising garbage and feeding that back to improve the models. You simply cannot do that with statistical AI: its models are not about being correct, only plausible, and they only work in fields where it does not matter that you cannot tell which 20%+ of the output is garbage.

     The Cyc project started generating the rules needed to read the internet and have common sense about 40 years ago. After about a decade they realised their size estimates for the rule set were off by 3 or 4 orders of magnitude. Thirty years on, it has finally reached the point where it can read all the information that is not on the page and so understand the text, and it still needs tens of humans clarifying what it does not understand about specific fields of knowledge, and tens more figuring out how to go from getting the right answer to getting it fast enough to be useful.

     To get to AGI or ultra-intelligent machines we need multiple breakthroughs, and trying to predict the timing of breakthroughs has always been a fool's game. There are only a few general rules about futurology:

     1. Prediction is difficult, especially when it concerns the future.
     2. You cannot predict the timing of technological breakthroughs. The best you can do, in hindsight, is to say that a revolution was waiting to happen from the moment the core technologies were good enough; that does not tell you when the person with the right need, knowledge and resources will come along.
     3. We are terrible at predicting the social consequences of disruptive change. People predicted the rise of the car, but no one predicted the near-total elimination of the industries around horses in only 20 years.
     4. You cannot predict technology accurately more than about 50 years ahead, because the extra knowledge needed to extend the prediction is the same knowledge you would need to do it sooner. You also cannot know what you do not know that you do not know.
     5. A knowledgeable scientist saying something is possible is more likely to be right than a similar scientist saying it is impossible; the latter do not look beyond the assumptions that led them to their initial conclusion. That still does not rule out some hidden limit you do not yet know about, like the speed of light or the second law of thermodynamics.
  6. @RawrxDev I would like to agree with you, but when I ask why they are sceptical they do not have valid reasons, and just end up saying "because humans are special", while those who claim it is fake intelligence basically say "because there is something special about it when a human does it". I generally find that such people effectively remove themselves from the conversation, amounting to little more than noise from some random fool. I would love to discuss, with genuine sceptics who have actual reasons, what mechanisms could get in the way of AGI and ASI, but they do not seem to show up. One example could be that when humans think, something in the underlying machinery does something quantum, but then you have to ask what it is about wetware that makes it the only viable way of getting the result, and in any case how these pesky expert systems can also get the same result and show the same chain of reasoning as the expert.

     I would tend to say that LLMs, and all other statistical and black-box AI, are basically blind alleys for anything but toy problems. There is a whole range of fields where, even if they could be shown to produce the right results, their underlying model and the impossibility of fixing wrong answers and security holes make them unsuitable for the job. AGI needs symbolic AI combined with multiple feedback cycles, to figure out not only that the answer given was wrong, but why it was wrong and what can be done differently to avoid making the same mistake next time (see the second sketch at the end of this section). Generally I believe that AI will get smarter, using symbolic AI, and that there is no predefined upper limit on how good it can get, but I would like those who turn up with opposing views to actually have some basis for them, and to be bothered to voice it so that actual discussion can occur, rather than just saying "because" and thinking that makes them automatically right.
  9. @Me__Myself__and__I No, I was not using it to show my ignorance, but to give a clear example of how the black-box nature of the system leaves you unable to know how it got the result, and of how functional equivalents of that problem are inherent to any black-box solution. Almost by definition, LLMs specifically, and black-box AI more generally, have the issue that literally the only way to handle wrong answers is to surround the system with another one designed to recognise previously seen wrong answers and return the result that should have been returned in the first place, thereby bypassing the whole system for known queries with bad answers while providing no mechanism for the system itself to get smarter, so it can neither avoid the known bad answers on its own nor reduce the number of unknown bad ones (see the last sketch at the end of this section).

     There is also the issue of the results being poisoned by bad training data, but my point is that the difficulty of detecting when this has happened, combined with the inability to fix it, fundamentally compromises the usefulness of such systems for any problem that really matters, because for those problems you typically need to know not only that the answer is right, but why it is right, and you need to know it fast enough for it to make a difference. While I am a fan of AI done well, too often it is not done well. Not only do you need the right type of AI for the problem, but for non-trivial problems it needs to be able to give and receive feedback about what worked and what did not. Black-box AI leaves "because the authority, in the form of the AI, said so" as the only answer to "why". I do not think that is good enough for most problems, and it certainly is not for the many jobs where you might later need to justify not only what you did, but why you did it.
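The sketches below are minimal illustrations of the mechanisms described in comments 2, 6 and 9; none of them is a real system. First, the kind of traceability comment 2 asks of symbolic AI: a toy forward-chaining engine in which every rule carries provenance back to the data that generated it, so any conclusion can be traced to named rules and sources. The Rule class, the rules and the facts are all invented for illustration.

```python
# Toy forward-chaining rule engine where every rule carries provenance:
# a pointer back to the data or document that justified the rule.
# All names (Rule, the provenance strings, the facts) are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    conditions: frozenset      # facts that must all be present for the rule to fire
    conclusion: str            # fact added when the rule fires
    provenance: str            # where this rule came from

RULES = [
    Rule("r1", frozenset({"has_feathers", "lays_eggs"}), "is_bird",
         provenance="training corpus: bird field guide, pp. 12-14"),
    Rule("r2", frozenset({"is_bird", "cannot_fly", "lives_antarctica"}), "is_penguin",
         provenance="curated fact sheet: penguin taxonomy v3"),
]

def infer(facts: set[str]) -> dict[str, Rule]:
    """Forward-chain until no rule adds a new fact; record which rule produced each fact."""
    derived: dict[str, Rule] = {}
    changed = True
    while changed:
        changed = False
        for rule in RULES:
            if rule.conditions <= facts and rule.conclusion not in facts:
                facts.add(rule.conclusion)
                derived[rule.conclusion] = rule
                changed = True
    return derived

if __name__ == "__main__":
    observed = {"has_feathers", "lays_eggs", "cannot_fly", "lives_antarctica"}
    for fact, rule in infer(observed).items():
        # Each answer comes with the rule that produced it and the source behind
        # that rule, which is the traceability a purely statistical model lacks.
        print(f"{fact}: via {rule.name}, grounded in '{rule.provenance}'")
```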
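Second, a companion sketch for comment 6's point that the feedback cycle has to establish why an answer was wrong, not merely that it was. The rule table, question strings and bookkeeping lists are all invented for illustration; the point is only the mechanism of tracing a wrong answer to a specific rule, retiring it, and keeping the case so the same mistake is not repeated.

```python
# Toy feedback cycle: every answer is returned together with the rule that
# produced it, so a wrong answer can be traced to its rule, the rule disabled,
# and the case kept as a regression test. All names here are invented.

rule_base = {
    # question -> (answer, name of the rule that justifies it)
    "is water wet": ("yes", "rule: liquids wet surfaces"),
    "do penguins fly": ("yes", "rule: birds fly"),   # deliberately faulty rule
}

disabled_rules: set[str] = set()
regression_cases: list[tuple[str, str]] = []

def answer(question: str) -> tuple[str, str]:
    """Return the answer and the rule behind it; withdraw it if the rule is disabled."""
    reply, rule = rule_base.get(question, ("unknown", "no matching rule"))
    if rule in disabled_rules:
        return "unknown", f"withdrawn: {rule} failed on earlier feedback"
    return reply, rule

def report_wrong(question: str, correct: str) -> str:
    """Feedback step: find *why* the answer was wrong, not just that it was."""
    _, rule = answer(question)
    disabled_rules.add(rule)                        # stop repeating the same mistake
    regression_cases.append((question, correct))    # evidence for whoever repairs the rule
    return f"'{question}' went wrong because of '{rule}'; rule disabled pending repair."

if __name__ == "__main__":
    print(answer("do penguins fly"))            # wrong, but it names the rule that said so
    print(report_wrong("do penguins fly", "no"))
    print(answer("do penguins fly"))            # now withdrawn instead of confidently wrong
```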
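Finally, the pattern comment 9 describes: an override wrapper around a black-box model that intercepts queries already known to go wrong and substitutes a corrected answer, while the model itself never improves. The model stub and the override table are invented for illustration.

```python
# Toy "override wrapper" around a black-box model: known-bad queries are
# intercepted and a corrected answer returned, but the underlying model never
# learns anything, so unknown bad answers are untouched. All names are invented.

overrides = {}   # query -> corrected answer, grows every time a bad answer is found

def black_box_model(query: str) -> str:
    """Stand-in for an opaque statistical model; we cannot inspect or repair it."""
    return f"plausible-sounding answer to: {query}"

def guarded_answer(query: str) -> str:
    # Bypass the model entirely for queries already known to go wrong.
    if query in overrides:
        return overrides[query]
    return black_box_model(query)

def patch_bad_answer(query: str, corrected: str) -> None:
    """The only 'fix' available: memorise the correction. The model is unchanged,
    so every novel variant of the same mistake still gets through."""
    overrides[query] = corrected

if __name__ == "__main__":
    print(guarded_answer("capital of Australia"))          # whatever the model says
    patch_bad_answer("capital of Australia", "Canberra")   # correct the known case
    print(guarded_answer("capital of Australia"))          # now right, by lookup only
    print(guarded_answer("what is Australia's capital"))   # same mistake, different words
```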