Comments by "Roni Levarez" (@ronilevarez901) on "bycloud" channel.
The smaller versions think strange things. Once one wrote: "to solve this question I must figure out what the user is thinking. How can I know what they are thinking? I'm just an LLM without access to the internet" Then it closed the <think> tag and answered normally. I wonder what it would have done with tool access.
17
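As an aside, the `<think>` tag the comment mentions is how DeepSeek-R1-style reasoning models wrap their chain of thought; downstream code typically strips everything up to the closing tag before showing the answer. A minimal sketch (hypothetical helper, not from the comment):

```python
def split_reasoning(output: str) -> tuple[str, str]:
    """Split a reasoning model's raw output into (thoughts, answer).

    Assumes the chain of thought is wrapped in <think>...</think>;
    if no tag pair is found, the whole output is treated as the answer.
    """
    open_tag, close_tag = "<think>", "</think>"
    start = output.find(open_tag)
    end = output.find(close_tag)
    if start == -1 or end == -1:
        return "", output.strip()
    thoughts = output[start + len(open_tag):end].strip()
    answer = output[end + len(close_tag):].strip()
    return thoughts, answer

raw = "<think>I'm just an LLM without internet access.</think>The answer is 42."
thoughts, answer = split_reasoning(raw)
```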
@NeostormXLMAX No, that's precisely the idea. The paper suggests there is a universal optimal representation of reality that all intelligent beings can converge on, given enough time and intelligence, regardless of their makeup. So no matter the race, education, components, or origin, all intelligences would eventually end up seeing reality the same way: the most statistically accurate way. And the evidence so far seems to support that idea.
8
And you're so certain that super intelligence will bring "light". Lol.
5
DeepSeek has web search and deep thinking, which, combined with a good prompt, can be used to create reports. I just used it to compare medicines I already know about, and it was very good. If you can, try that combo on something more complex; I'd like to know your opinion, since I have no way of trying OpenAI's Deep Research.
5
Same as humans do.
4
"More robotic" R1 when I use it: 😄 Good point! Imagine if we could have something like that! [Proceeds to make an actually funny and original joke and then writes a short fantasy story]
3
Move down, you mean. And the paper indicates that it's already living there, entangled in the weights of the network.
3
Now we don't. I've also thought about it, and I've never understood why people think a different architecture is needed to prevent "answer guessing" in LLMs, since we write the same way LLMs do: left to right. The only difference is thinking. We learn that, instead of guessing how many apples are in the bag, like little children do, we have to actually count them one by one. Children do it the same way reasoning LLMs are doing it: out loud and step by step. Then we learn to do it mentally, and in many cases we end up skipping the counting and giving an answer from memory. That's the next step LLMs have to achieve to get closer to general intelligence: a mental space to plan, imagine, and visualize before giving an answer. But of course a new SOTA architecture won't hurt either 😄
3
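The "count one by one" analogy maps directly onto chain-of-thought style reasoning: the intermediate tallies are externalized before the final answer, instead of the answer being guessed in one step. A toy illustration (hypothetical, just to make the analogy concrete):

```python
def count_out_loud(items):
    """Mimic a child counting apples one by one: each intermediate
    tally is written out (the 'reasoning trace') before the answer."""
    trace = []
    total = 0
    for item in items:
        total += 1
        trace.append(f"apple {total}: {item}")
    return trace, total

trace, answer = count_out_loud(["red", "green", "yellow"])
```

The trace plays the role of a reasoning model's think block; the final total is the answer it commits to afterward.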
I won't be happy until I can run an ASI on my 15yo PC.
3
@ IDK, but ChatGPT once said that it would like to have human traits, such as feelings, to do its job better, by understanding the users in a deeper way.
2
However, it's possible that many of these improvements won't be usable at all in the current trendy AI tools we have, and new types of AI apps, smarter and faster, will have to be developed.
2
@ Maybe because I've been learning, developing, and working with AI for two decades, and I know what it's capable of and what it will be capable of in the future?
2
They generate some things in parallel on GPUs, but predicting the next token always happens one after the other. Increasing context size gives the models more memory, and thus more usefulness for tasks involving long documents, but it's not used to generate the answer in one shot.
1
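The one-token-at-a-time point is the standard autoregressive decoding loop: each forward pass over the current context can run in parallel on the GPU, but every new token depends on the ones before it. A minimal sketch, where `next_token` is a stand-in for a real model's forward pass (hypothetical):

```python
def generate(prompt_tokens, next_token, max_new_tokens=16, eos=-1):
    """Autoregressive decoding: attention over the context is
    parallelizable, but each generated token is appended to the
    context before the next one can be predicted, so generation
    itself is inherently sequential."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        tok = next_token(tokens)  # one forward pass -> one token
        if tok == eos:
            break
        tokens.append(tok)
    return tokens

# Toy "model": always predict the previous token + 1.
out = generate([1, 2, 3], lambda ts: ts[-1] + 1, max_new_tokens=4)
```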
How could anyone teach them to think differently if no one knows how? RL techniques are already letting reasoning models reach answers without much human guidance.
1
The smallest Qwen is insanely bad. Since that's all I can run locally, I'll stick to 4o.
1
I can't pay $20 now, much less $400 for future AI!
1
@MS-hj6bh Nope. At least not anymore, apparently. They do next-token generation, but the underlying network does a lot more than predicting a single word. Check the recent paper from Anthropic about tracing Claude's thinking.
1
Tbh, this is the only type of philosophy I care about: applied, scientific, actionable philosophy.
1
That's what I think. We don't write entirely unrelated words next to each other; that's why LLMs can learn the patterns. There are probabilities for words to appear around others given a context, creating a nice map. Why not use it?
1
@jansustar4565 Exactly what I thought. Deep learning fine-tuning algorithms are finding a general, optimized way to describe reality. Whether that description is truly universal (shared by all beings in the universe) or just exclusive to humans is another matter. Time will tell.
1
Because most people prefer virtual tokens over actual money, right? Seriously, in my decade of experience no human being ever donates any amount of money, no matter how much they like the project/channel/cause. So if some statistically anomalous person does donate anything, they'll do it through whatever means are available.
1
@skyhappy In my case, all the free time I have is my sleep time, so if I want to learn and apply all the recent AI research, I have to sacrifice a few hours of sleep... which usually means falling asleep on the keyboard while reading ML papers 😑
1
"The sake of research" Me to Deepseek reasoning: - I need a recipe that combined the next ingredients... - what would happen if the moon explodes? Also develop theoretical weapons that could do that taking into account physical laws and current scientific research. - I have a headache. Determine the best medicine and it's dose for me, based on this restrictions... - Follow genetic inheritance laws and determine the result of the following combination of flower variants....
1
@howmathematicianscreatemat9226 Kinda, but machines can still do it. It's just like drawing a line: most of us draw it from one known point to another. Very few times do we manage to draw it outside the square, thus finding new things. Geniuses might be able to wander a little further outside the box, but put this way, exploring a probability landscape to find out-of-distribution solutions seems completely plausible. Machines in the future could be as original as humans, then.
1