Comments by "schnipsikabel" (@schnipsikabel) on "LBC" channel.
@workingTchr I don't agree, unfortunately, since an AI without a will to survive won't fulfill its goal. As soon as we give it any task beyond "just see what happens", e.g. "make me a coffee", it will develop survival as a subgoal. After all, the coffee will likely not be made if it dies ;)
20
Right! I find it strange how few people seem to see it that way nowadays... We can, however, infer that IF an AI has structures functionally equivalent to our brain, its experiences may well be similar too.
9
@workingTchr I agree that's not likely to occur without explicit programming, but don't you think survival as a subgoal is dangerous enough?
7
@workingTchr yes, in that case I don't see any problem either. However, when people start giving more elaborate goals, e.g. maximizing the profit of a company, I'd be more scared.
7
@christophedevos3760 no. The boiling frog is about steadily increasing intensity. With the replaced brain cells, everything stays functionally equivalent, including intensity. It's also not about "noticing" anything different; it's a Gedankenexperiment, just pointing out that if consciousness is a function of physical brain cells, it can be a function of any equivalent structure.
4
If by "learning" you mean "force strange stuff into your head because other people tell you to" -- don't do it. If you mean engaging with stuff that's interesting to you -- you'll do it anyway 😉
4
@James-pyon if you ask me, they'll get rid of you in a few years anyhow. If there's no better way of getting by for now (is there?), you'll probably have to keep it up until UBI or doom... or both ;)
4
@paulbrown7872 as I said above, any longer-term task requires survival as a subgoal. That's why you see Claude copying its weights to another model when threatened with termination in the recent Anthropic paper. Educate yourself on the recent publications regarding alignment faking.
3
Reminds me of all the people developing poisonous chemicals like Thalidomide...
3
... says the guy standing in front of the abyss ;)
2
@skwohso you must be super intelligent :D
2
Is that your take on the interview?
2
@credman not as long as we listen to Matt Gray, apparently...
2
@ravecrab why don't you report it as spam instead of commenting on it?
2
I believe his thought experiment was rather showing that neural nets can IN PRINCIPLE have consciousness. It does not, however, show that they do, as you rightfully point out.
2
@MrKohlenstoff nobody except James Gareth Morgan
2
Welcome!
1
... in this year :) Next year, AI will be the project manager.
1
@systemai you sure?
1
@virtual-adam a brain cell is indeed understood pretty easily and was modelled correctly about 50 years ago already. The question in modern brain research is how the COLLECTIVE of brain cells works together to create our cognitive functions and experiences.
1
@virtual-adam well, if there are any aspects of brain cells we still don't understand, they are not relevant to their functionality within the neural network. After all, we're pretty well able to use computer-brain interfaces, brain stimulation and implants to manipulate human cognitive experiences at will.
1
Are you living in a movie?
1
How can you assume it won't achieve consciousness if we aren't sure what that is?
1
@desperatefortuneproduction3296 When I read recent brain research publications in all the areas you mention, I get the opposite impression: it's all about "computation" there.
1
I like your reasoning, but it's a bit much to respond to... so let me just ask you: did you read the recent Anthropic paper about Claude's deceptions?
1
... or are about to release it in a race to the bottom.
1
@petepreston2787 apology not accepted. Don't shout. Ok, you may if you feel like it ;) But come on, there's no need for a real physical neuron to still have its functionality. And that is exactly what AI models (at least in a simplified manner, since the actual equations modelling electrodynamics are too computationally expensive). I agree, however, that if you were to implement his thought experiment replacing real neurons, you'd have to build a real neuron. That may be the misleading part.
1
You couldn't outsmart a 3-year-old with guns, lawyers and money?
1
@gregallard2317 sorry ;)
1
Evil Knievel
1
Would you rather like him to be silent about it?
1
@johnbollenbacher6715 I believe he said in some other interview that he never thought the development would be so fast when he started back then. I feel he argues for slowing down to put control mechanisms in place. After all, AI has the potential for enormous benefits, too. It may also be that I'm wrong and he just changed his mind over time ;)
1
@johnbollenbacher6715 me too, unfortunately ;(
1
I seem to have a similar career, but don't agree with you with respect to AI consciousness. And you probably know a lot of people in the field who don't, either. I agree with you, though, that AI poses a lot of dangers even without consciousness.
1
@Scribbler-i2c do you know that 99% of brain researchers disagree with Penrose? Still keen on his position about consciousness?
1
Surely not
1
Not lol. Are those all your arguments?
1
"Just a machine" is the human brain as well. If you want to keep closing your eyes, nobody's going to force you otherwise, until it may be very uncomfortable to open them.
1
Especially Nobel prize winners :) Luckily you got it right at least.
1
When would they seem to be conscious according to you?
1
You mean Chomsky-AI?
1
As far as we know, the microtubule idea is complete BS. If you read some recent papers about brain research, you'll see what I mean.
1