Comments by "clray123" (@clray123) on "theSeniorDev" channel.
"Just predicting the next token" is enough if you predict the correct next token... LeCun's skepticism dates from 2023. Since then, LLMs have been trained to solve tasks - by predicting the next token - that were previously claimed to require "more intelligence". The point is that even if LLMs "just" copy and imitate, the algorithms used to generate (synthetic) training data can be more complicated and smart. And if you train on enough examples of "thinking", the imitation may well be just enough to perform the tasks which you perform using your real thinking.
8
Unless it self-corrects and generates better information to train the next gen of AI. I don't think it's happening yet (the opposite is true: it diverges), but it is not impossible in theory - this gradual collect-evaluate-retrain process is what the entire human knowledge/civilization is based on.
7
And yet you still have to sit there and let autocomplete autocomplete. And when it fails, it's entirely your fault.
6
@Slav4o911 Correct, they are more like a big database with copy-paste and erroneous paraphrasing of the context symbols. They love to get confused by the same symbols that they have produced earlier. And they love, really love, to repeat what's in the context. That self-love alone is enough to defeat any kind of reasoning. What you are seeing in these pseudo-reasoning models is simply output from training on traces of ACTUAL reasoning, and also on traces of imitations of actual reasoning (resulting in similar-looking words, but often nonsense "conclusions"). And no, the model does not detect its own nonsense - in most cases it will just rephrase the nonsense and continue happily.
4
Artificial Imitation
3
@ By brute force, checking for inconsistencies and contradictions in the output. For example, it is a long-standing NLP classification task to determine whether two sentences contradict each other - so if you apply such a classifier over a generated text which contains contradictions, you can remove some. The current problem is that it is very slow and compute-intensive, as you have to make multiple passes over the same text, while the public has been led to expect that a magic LLM like ChatGPT will do it all in a single pass in real time. In reality the slowness of the vetting process might even make the tools unfit for some domains that inherently require quick decisions, or in which humans are able to decide much faster.
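The brute-force vetting described above can be sketched in a few lines. This is a toy sketch: `contradicts()` is a hypothetical stand-in for a real NLI contradiction classifier, and the pairwise loop makes the quadratic cost of the multi-pass check visible.

```python
from itertools import combinations

def contradicts(a: str, b: str) -> bool:
    """Hypothetical stand-in for an NLI contradiction classifier.
    Toy rule: flags a sentence paired with its explicit negation."""
    return a == "not " + b or b == "not " + a

def filter_contradictions(sentences):
    """Brute-force vetting pass: compare every sentence pair and drop
    the later member of each contradicting pair. O(n^2) comparisons -
    this is exactly why multi-pass vetting is slow on long outputs."""
    dropped = set()
    for i, j in combinations(range(len(sentences)), 2):
        if i not in dropped and j not in dropped and contradicts(sentences[i], sentences[j]):
            dropped.add(j)
    return [s for k, s in enumerate(sentences) if k not in dropped]

text = ["the sky is blue", "water is wet", "not the sky is blue"]
print(filter_contradictions(text))  # ['the sky is blue', 'water is wet']
```

In practice `contradicts()` would be a learned model invocation, so each of the n^2/2 pairs costs a full forward pass - nothing like a single real-time generation.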
3
@Leonhart_93 Not really. The problem is that with politics notoriously pumping money into the system EVERYTHING is expensive, so it's kinda hard to say where to invest the money any more.
3
@therainman7777 Dude, transformers have been the current approach since 2017 - that's 8 years already. You're talking out of your ass.
2
Well, given that you're into "astrology", that's not much to claim.
2
@ What channel?
2
@aguspuig6615 I would not call what is going on right now reasoning, as you can still trip up the supposedly "reasoning" AIs in ways that would not work on a reasoning human. Or they produce illogical generations in which obviously flawed "reasoning" produces a correct result, making it apparent that the result was not derived by the reasoning, only tacked on from memory on top of it. Overall, there is a problem of trustworthiness, but I suppose that can be handled statistically for some tasks - the only criterion here would be that the machine reliably makes fewer mistakes than a human on that task.
2
@ikusoru What are you smoking?
2
My prediction is next year will be like this year only (a bit) more so.
1
Because u suck
1
Curious how I still don't give a shit about social media.
1
6:30 complete bs - AI agents calling functions or "tools" are already interacting with "reality" right now, you can even download one and run it locally to see where it works/fails.
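A minimal sketch of the tool-calling loop that such locally runnable agents use. The tool names and the JSON call format here are made up for illustration; real agent frameworks differ in detail but follow the same emit-call, execute, feed-back-result shape.

```python
import json

# Hypothetical tool registry: the "tools" the agent is allowed to call.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
    "upper": lambda args: args["text"].upper(),
}

def run_agent(model_outputs):
    """Minimal agent loop: the model emits JSON tool calls as text
    (i.e. by predicting next tokens); the runtime parses each call,
    executes the real function, and collects the results."""
    results = []
    for out in model_outputs:
        call = json.loads(out)        # parse the model's emitted tool call
        tool = TOOLS[call["tool"]]    # dispatch to the actual function
        results.append(tool(call["args"]))
    return results

print(run_agent(['{"tool": "add", "args": {"a": 2, "b": 3}}',
                 '{"tool": "upper", "args": {"text": "ok"}}']))  # [5, 'OK']
```

The point is that nothing here is hypothetical about the interaction itself: the function calls have real side effects, which is what "interacting with reality" means for these agents.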
1
@johnf4680 The term "agents" - for software which is interacting with its environment (including itself, as in multi-agent systems) to reach set goals - is not new either, it has been used in AI research for decades. So maybe stop trying to teach me how to use standard terminology.
1
Apparently, they're worried their biz is gonna dry up too soon.
1
After watching 2 videos on this channel I can assure all viewers that the owner of this channel does not know what he's talking about and is making stuff up as he goes. The problems he describes are real, but his explanations for the problems are complete bollocks. In other words, he's a scammer who wants to sell you his product, not much more.
1
@therealseniordev all you are good at is Internet marketing
1
You don't need a "union" you need business acumen and real skill instead of your lazy communist ass.
1
@kpw84u2 DeepSeek is just as shit as o1 if not worse.
1
4:40 he is talking out of his ass. It is not the "size of the model" that makes bigger inputs more problematic, it is the size of the input (which, in conjunction with the attention mechanism, makes it more likely that WRONG pieces of input will somehow influence the output; in other words, the model is not capable of ignoring parts of the input as it should - and big models are generally BETTER at that, not worse).
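The dilution effect described above can be shown with a toy softmax-attention calculation. The logit values are made-up stand-ins for query-key dot products; the point is only that with a fixed score for the one relevant token, adding more distractor tokens shrinks the attention mass it receives.

```python
import math

def softmax(xs):
    """Standard numerically-stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def relevant_attention(n_distractors, relevant_score=3.0, distractor_score=0.0):
    """Attention weight landing on the single relevant token when the
    input also contains n_distractors irrelevant tokens. The scores
    are invented logits standing in for query-key dot products."""
    scores = [relevant_score] + [distractor_score] * n_distractors
    return softmax(scores)[0]

# Longer input -> less attention on the relevant token,
# so more of the output is driven by the wrong pieces of input.
for n in (1, 10, 100, 1000):
    print(n, round(relevant_attention(n), 3))
```

This is a property of the input length, not of the parameter count: a bigger model can learn sharper score separations (larger relevant_score), which is why bigger models tend to handle long, noisy inputs better, not worse.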
1
That's bollocks. I'm already a millionaire. And I don't need/want to order more products.
1
@joecater894 The problem in your reasoning is that the "little people" will not be able to buy anything because the "increased productivity" will come from them getting fired (= no/less income any more) and replaced by robots.
1
@allahuvonaugustera7895 The profits can only be kept up if you can do the same sales with less staff. Which is efficiency.
1
@TheChrismeg34 All around the world, government spending on non-defense as a percent of GDP has been rising steadily since WW2. This means we are drifting toward a centrally managed socialist economy where corruption and politics decide which areas do or don't receive investment. In a normally functioning market the driving force should be private investment, not government subsidies (or penalties). In other words, the situation used to be better than it is today.
1
@AtticusKarpenter The problem is that the kid can do 99 things correctly and in the 100th instance suddenly make up shit. And you'd never know when, unless you have some strict testing regimen around it (for which you probably need another, independent AI kid; but then they can still make up shit during the test phase, e.g. producing false alerts that YOU have to inspect). So constructing serious software based on self-supervising AI today is kinda like building castles on sand. But it is not evident that it will remain so. Essentially, extrapolating current AI shit into the future, as the channel hosts do, is the very same mistake committed by the AI bros who are selling infinite AI growth. For some tasks, sufficient non-AI supervision is possible to still make it more efficient to complete the task with AI assistance than without it, and it is your job to find and automate these tasks.
1
I suppose this is what manipulative AI conspiracy nuts look like.
1