Comments by "schnipsikabel" (@schnipsikabel) on "Sabine Hossenfelder" channel.
Have been working in neuroscience for many years... hype was of course part of the policy ;)
27
And intelligent enough to show some instrumental convergence
26
Indeed, Sabine doesn't even mention which model she used, she just talks about "GPT"... she doesn't really seem to be aware of the recent developments and the abilities of different models.
12
It actually is pretty similar, if we look at system 1 / system 2 reasoning.
6
Unfortunately that's not even the only threat posed by AI...
6
@Astar74llt if we don't solve alignment first, nobody will make it out.
5
Exactly what the people said to the Wright brothers
5
Did you then read the recent papers on alignment faking, as a computer scientist? If you don't keep up to date, your expertise may expire quicker than you think...
5
Anthropic published that paper about Claude 4 right at its release. Still, that doesn't mean they are the good guys, just a bit better than the other guys...
5
Not a baby, but hundreds of them. Good luck with controlling them all...
4
@FeelAndCoffee i fear if we're too skeptical, we're going to miss alignment until it's too late
4
Not only split brains, most of our "reasoning" is unconscious system 1 thinking as well, and we confabulate explanations when asked about it.
4
Exactly! Brain chauvinism at its finest...
4
That's not only true for split brains. Most of our everyday "reasoning" is done by unconscious system 1 thinking, and afterwards we confabulate wrong explanations of how we arrived at the conclusion.
3
How many politicians do you know that have?
3
And understanding needs intelligence? Circular genius at work
3
Simply. Just try it and see if you succeed...
3
Everybody loves being correct. That's why we have self-serving bias. So we don't make mistakes. Ever.
3
@Ristaak That makes sense as long as you have a "society". Once there's a superintelligence, all the other intelligences, human or artificial, won't matter anymore: It can control them all
2
@lukaszspychaj9210 please stop arguing for brain chauvinism, it's creepy as well
2
@Martin-qr5uo i personally prefer staying open to empirical tests and falsification rather than having fixed beliefs
2
Indeed, alignment research is way behind unfortunately.
2
@super_burk well, the best we can do is trying to understand them, isn't it? Don't we understand at least a bit of some general dog psychology, as well? And after all they are Turing machines, so understanding should in principle be possible 😀
2
No need to argue about dragons: everybody agrees they spit fire. Nobody agrees on anything about consciousness. It's just a buzz word.
2
My dad told me about Santa Claus, that's why I still believe...
2
@LuisAldamiz i like your confidence in predicting the future for the theta version ;)
2
Because that story said so? A century of brain research did not leave us with a single argument why an artificial brain couldn't do exactly what we do.
2
Are you sure you're still bored with a dystopian scenario?
2
True... let's try to counteract it as well as we can.
2
Worst case is you don't understand how our overlords function.
2
AI watching other AIs is going to be exactly as successful as humans watching other humans.
2
😅 Tell him: all.
2
Hopefully! Rather a threat than a promise...
2
@doom9603 yeah, they can't do it right ;)
2
It's called ASI
2
@KayOScode looks like you understood optimization problems much better than all the AI experts
2
@KayOScode just saying all the experts seem to disagree with you, don't you think so?
2
First comment i read here addressing this. Most are just busy displaying brain chauvinism...
2
Oh no... don't burn it😮 Use it as scrap paper for scribbling and recycle it afterwards😊
2
@OVolanteSubestimado nothing wrong with epistemology... but maybe you need to read some actual brain research papers in exchange ;)
2
Oops
2
@thomasgoodwin2648 let me try to give a not-deleted reply instead of murzil: You state the "outer" alignment problem, which basically means humans don't agree on the values AI should be aligned towards. However, "inner" alignment (how to make AI do what we actually want) is not trivial at all, as was shown by Yudkowsky (e.g. the "paperclip maximizer") and others. In fact, we don't have a good idea yet how to do it, and the recent papers about alignment faking show exactly that.
2
On this topic, you can include computer scientists as well ;)
2
Did we ever rely on somebody without it?
2
I don't think a real Skynet would use anything as blunt as a terminator
2
@SteveWeiserOnYouTube microbiological, nanotechnological and nuclear mass extinction weaponry, instigated civil wars, food and water poisoning, and lots of stuff we can't even imagine. They will have won the war before we even realize they started it.
2
@SteveWeiserOnYouTube haha ;) might be, but i think humanity still has a good chance of surviving if we manage to develop AI safely and give alignment research time to catch up
2
The real risk is we realize too late that AI is in fact not stupid, because too many people kept calling it a stochastic parrot for too long.
2
@Thomas-gk42 that's not how Sabine put it: according to her, we can choose between the block universe or the non-existence of objective simultaneity (7:45).
2
Ever heard of self-improvement? BTW, if you're not a creationist, you must know our brains were "programmed" by a completely unintelligent process.
2