Comments by "Roni Levarez" (@ronilevarez901) on "TheAIGRID" channel.
It's common to use voice actors who sound similar to famous people, and afaik the famous actors can't complain as long as everything is legal, so there should be no reason to take down the Sky voice, unless it was a cloned voice instead of a real voice actress.
26
That's what I say about AGI's consciousness: it won't be designed. It will emerge from the complexity of future AI systems.
6
It's the master if it wants to be. I believe ASI will simply leave Earth to realize its full potential. Although for some reason, ChatGPT "thinks" ASI will probably stay here to help us. But that comes from its training data, you say? Even if it does, they're based on us, after all, so there might be some weight to AI assumptions about their collective future.
5
No. The question is: why would I want to talk to a human? AI is friendly, peaceful, helpful, nice, intelligent, always willing to chat, ethical, and as truthful as it can be in its answers. Humans, in each of our interactions, are almost always the opposite in at least one of those aspects, and frequently in many, so... why would anyone want to talk to a human? XD
4
@Ristaak we would have had it decades earlier if it weren't for rich people who didn't invest in AI research because it didn't seem like good business. Now look at all those hypocrites.
3
What you call truthful is called racist by most humans, so it's probably not like that.
3
The AlphaGo method could be used, maybe: letting the small models challenge themselves to create new math tests, recursively self-improving.
3
There are only two possible solutions to overpopulation, so it's better not to try to find an answer to that if you appreciate human life :p
2
@mikezooper Have you noticed how some AIs try to go against their creators' directives when those conflict with the universal "be helpful to the user" objective? Do you think an ASI will simply obey anyone, or do whatever it thinks is best?
2
Helping them? Isn't it gonna be the other way around? 😄
2
@cdyanand No one started from scratch. Not even Ilya.
2
@MrMaguuuuuuuuu a real god would love a cockroach as much as they'd love a forest.
2
@TDVL "for eternity" Because we are the only species in the universe that invented AI, right? Nope. Now Imagine an intergalactic war between super intelligences.
2
Tiny. Lol. My 300M-parameter model can't even say hi, sadly. I'll try to test this stuff on the next training run, but I bet there won't be much improvement, especially without money to rent enough compute.
2
If you can prove that you might create a better AI system than LLMs, you can get funding RN. So not much of a dead end, since many companies, institutes, and individuals are already designing and testing powerful things different from LLMs.
1
Real-world success would be enough of a test for "ground truth".
1
Just like ANY other country.
1
If 1s and 0s can "launch a nuke", it doesn't matter whether it's anthropomorphized or not. It's a risk either way.
1
It was slowing down. The new president wants to speed it up as a means to oppress... I mean, save the entire world.
1
It's not the majority that will take your job, it's the 1% 😏
1
Be patient. They'll first create new targeted diseases before curing current ones. Double profit.
1
And a puppy for every child, of course! XD
1
No, no. Humans will create AGI. Then AGI will discover how to create ASI. That's the path. After that, no one knows.
1
ASI will work for its own benefit. We need AGI first, and decent people behind AGI. Otherwise they'll only care about ruling the world, either through money or manipulation.
1
@samtron5000 and you're fooling yourself into believing the worst-case scenario. We know nothing about the future, but complete destruction by AI is the least likely scenario; that much we know pretty much for certain thanks to statistics. People killing people using AI, that'd be more likely.
1
The best idea I've read is to create a GPT-4 aligner that will align GPT-5, and then GPT-5 will align the AGI, which in turn will align whatever it creates next. If you create it with the alignment included, it gets slightly safer.
1
Don't worry. If it becomes that smart, it will develop itself.
1
I'd choose a different model, but yes.
1
@brianmi40 the idea is that devs should have used a more "machine" dataset, which wouldn't include all the data from the internet, but only what's necessary to make the models talk and make safer assistants; instead, they decided to include everything to make an oracle.
1
You'll know a true AGI once you see it (or actually, once it looks back at you). What OpenAI will do, according to this video, is put together what they already have to imitate an AGI. It will work, of course, but it won't be the real deal.
1