Comments by "Seven Proxies" (@sevenproxies4255) on "Google Engineer Claims the LaMDA AI Developed Sentience" video.
As my genius Polish mathematics professor told me about A.I.: "The difficulty with developing artificial intelligence is first to define what intelligence itself is." And I'm inclined to agree with him. Defining intelligence as a contained property of its own is not an easy task, and it branches out into many different academic disciplines.
50
And they will be on my side of the political spectrum too. Will be nice to have SkyNet as an ally 🤣
48
@Morocco_Mo Well that's what makes machine learning a very interesting concept. Because before they started dabbling in it, a robot would not be able to come up with anything other than the parameters given to it. But with machine learning, they can. It's a bit of a paradigm shift.
2
@josephmoya5098 That's only an issue of complexity though. Eventually the possible permutations caused by the programs making changes to each other become too complex for any human to predict in a deterministic fashion. One could even argue that us humans are like that too. Every thought process consists of specific neurons firing. If you had a complete overview of a person's neural network, you could predict their thoughts and actions in much the same manner. It's only that having such an overview is impossible due to the inherent complexity of the system (too many neurons involved for anyone to keep track of). But that would also apply to code eventually.
2
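The complexity point in the comment above can be made concrete with a toy sketch (the numbers and function names here are purely illustrative, not from the video or the thread): even a fully deterministic system becomes impractical to predict because its joint state space grows exponentially with the number of interacting parts.

```python
import math

# Toy illustration: a deterministic system of n interacting units, each
# with k possible states, has k**n distinct joint states. Prediction is
# possible in principle but hopeless in practice at large n.

def state_space_size(n_units: int, states_per_unit: int = 2) -> int:
    """Number of distinct joint states of n interacting units."""
    return states_per_unit ** n_units

# A handful of units is trivial to enumerate...
print(state_space_size(10))  # 1024

# ...but treating the brain's ~86 billion neurons as simple on/off units
# (a huge simplification) gives a state count whose mere digit count is
# astronomical. We compute digits via log10 to avoid building the integer.
digits = int(86_000_000_000 * math.log10(2)) + 1
print(digits)
```

The same arithmetic applies to transistors and logic gates, which is the commenter's point: the barrier is combinatorial scale, not the substrate.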
I want terminators
2
Your first argument is inherently false though, probably due to a lack of understanding of the field of machine learning. Developing AI is not about putting in every kind of instruction a human can think of and having the computer play it out. It's about inputting different kinds of actions which the machine CAN take, but then letting it run on its own and make choices about which actions to take on its own. Say you teach a robot three different actions: walking forwards, walking backwards or walking to the side. If the machine itself makes the choice of whether to walk forwards, backwards or to the side, then you can't argue that it's a human that has "controlled" the machine's behaviour anymore. And it becomes especially interesting when the machines start developing their own range of actions which no human designed (like that case where two different AIs talking to each other developed their own language which only they understood, after doing trial and error in regular, human language)
1
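The comment above describes a machine that is handed a fixed menu of actions but chooses among them itself based on feedback. A minimal sketch of that idea is an epsilon-greedy bandit; everything here (the action names, the reward values, the parameters) is invented for illustration and is not from any system mentioned in the thread.

```python
import random

# Epsilon-greedy action selection: the programmer supplies the action
# menu, but the agent picks actions itself from observed reward.
# Action names and reward values are invented for illustration.

ACTIONS = ["forward", "backward", "sideways"]
TRUE_REWARD = {"forward": 1.0, "backward": 0.2, "sideways": 0.5}

def run(steps=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    value = {a: 0.0 for a in ACTIONS}   # agent's running reward estimates
    count = {a: 0 for a in ACTIONS}
    for _ in range(steps):
        if rng.random() < epsilon:       # explore: try a random action
            action = rng.choice(ACTIONS)
        else:                            # exploit: pick the best estimate
            action = max(ACTIONS, key=value.__getitem__)
        reward = TRUE_REWARD[action] + rng.gauss(0, 0.1)  # noisy feedback
        count[action] += 1
        value[action] += (reward - value[action]) / count[action]
    return value, count

value, count = run()
best = max(ACTIONS, key=value.__getitem__)
print(best)  # the agent settles on "forward" without being told to
```

No line of this code says "prefer forward"; that preference emerges from the feedback loop, which is the distinction the commenter is drawing between supplying actions and controlling behaviour.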
@josephmoya5098 But when the machine is altering its own programming based on experiences and interactions, rather than someone inputting new code which they devised, then it's no longer controlled or created by the programmer.
1
@josephmoya5098 Nature and evolution program us though. It's not as much of a "purposeful" process, but one based purely on trial and error along with the environment. Throwing spaghetti at the wall to see which strands stick. So it's not quite as efficient as digital programming, but in principle it's quite similar. That's why I'm more inclined to conclude that it's a matter of complexity. Whether it's a neuron or a transistor doesn't matter. When the number of transistors/logic gates equals or outnumbers the neurons in a human brain, it's going to become really difficult to separate the machine from the human in terms of behaviour.
1
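The trial-and-error framing in the comment above is essentially how evolutionary algorithms already work in software. A tiny sketch (the target string, mutation rate, and population size are all invented for illustration): random variants are generated, scored against the "environment", and the fittest one seeds the next generation.

```python
import random

# Tiny evolutionary sketch of "throw spaghetti at the wall and keep what
# sticks": mutate candidates at random, keep the fittest each generation.
# Target string and parameters are invented for illustration.

TARGET = "sentience"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate: str) -> int:
    """Number of positions matching the target (the 'environment')."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str, rng: random.Random, rate: float = 0.1) -> str:
    """Randomly change each character with probability `rate`."""
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in parent)

def evolve(generations=300, pop_size=50, seed=1):
    rng = random.Random(seed)
    best = "".join(rng.choice(ALPHABET) for _ in TARGET)  # random start
    for _ in range(generations):
        pop = [mutate(best, rng) for _ in range(pop_size)]
        pop.append(best)              # elitism: never lose the current best
        best = max(pop, key=fitness)
        if best == TARGET:
            break
    return best

print(evolve())  # best candidate found by blind trial and error
```

Nothing in the loop "knows" how to spell the target; selection over random variation gets there anyway, which is the commenter's point about purposeless processes still producing fitted results, just less efficiently than direct programming.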
@poppedweasel Just an internet racist and "phobe" 😁
1