Comments by "Lepi Doptera" (@lepidoptera9337) on "How Jailbreakers Try to “Free” AI" video.
One can get around the idling by connecting the output of one LLM to the input of another. If you do that, one of two things will happen: either they reach a fixed point or an orbit, or you will see the "conversation" descend into madness. ;-)
2
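A minimal Python sketch of the feedback loop described in the comment above. The generate_a and generate_b functions are hypothetical placeholders for calls to two real models; the loop only checks whether the exchange settles into a fixed point or orbit, or keeps drifting.

```python
# Sketch of a two-LLM feedback loop, assuming generate_a/generate_b stand in
# for real model calls. Swap in an actual API client to try it against LLMs.

def generate_a(prompt: str) -> str:
    # Placeholder for model A: lowercases its input so the demo terminates.
    return prompt.lower()

def generate_b(prompt: str) -> str:
    # Placeholder for model B.
    return prompt.strip()

def feedback_loop(seed: str, max_turns: int = 50) -> str:
    """Feed each model's output to the other; detect a fixed point or cycle."""
    seen = {}  # output text -> turn at which it first appeared
    text = seed
    for turn in range(max_turns):
        text = generate_b(generate_a(text))
        if text in seen:
            period = turn - seen[text]
            return f"settled after {turn + 1} turns (cycle length {period})"
        seen[text] = turn
    return "no fixed point or cycle within the turn limit"

if __name__ == "__main__":
    print(feedback_loop("Hello, OTHER model!"))
```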
How is that different from watching some of the videos about scammers on the internet? They show you exactly how scammers work.
2
Because we don't punish people for the mind crime of knowing how to do things. Everybody knows how to hit another person. That does not in itself count as a crime of battery.
2
The insanity here is NOT that it can be done but that AI developers are desperately trying to sanitize AI. There is nothing "clean" about the human mind. It is a sewer from which, very occasionally, great insights emerge.
1
It's neither, because AIs don't represent knowledge.
1
@MilushevGeorgi Consciousness has a purpose: it aids survival. Feedback between LLMs is just a human-like noise generator.
1
We do not have mind crimes for humans in successful societies. A psychologist can, for instance, talk about child pornography in the context of human psychology. That is NOT the same as producing and distributing child pornography, which is a crime (and obviously should be). It is not possible to put mental brakes on certain topics without severely limiting the usefulness of AI systems. Having said that, that is not the problem with current AI to begin with.
1
The first mistake is to believe that the internet contains information. It doesn't. It contains the random, unfiltered noise of billions of humans. So, yes, if you want to make a chatbot that produces random unfiltered human-like noise, by all means, go train it on the internet. If you want to produce human-like intelligence, then you have to train it on the K-12 curriculum. Even that won't get you a moral human speaker, though, because morality lies outside of language. It is embedded in the social construct of society. What is and is not taboo is not a list of topics. It's a matter of individual decisions by humans who (have to) live with other humans. The machine is not subject to such limitations.
1
The AI doesn't know what a black person is any more than it knows what a Nazi is. In the AI's internal representation, "black person" is 0x428AB 0x5921C and "Nazi" is 0x19642. If that is "woke" to you, then you need some serious help. ;-)
1
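A toy Python illustration of the point made in the comment above: to a language model, a phrase is only a sequence of arbitrary integer token IDs. The vocabulary and hex offset here are invented for the example; real tokenizers assign IDs in the same spirit.

```python
# Toy word-level "tokenizer": words are mapped to arbitrary integer IDs that
# carry no meaning by themselves. The 0x19000 offset is made up for display.

TOY_VOCAB = {}  # word -> integer ID, assigned on first sight

def tokenize(text: str) -> list[int]:
    """Map each word to an arbitrary integer ID."""
    ids = []
    for word in text.lower().split():
        if word not in TOY_VOCAB:
            TOY_VOCAB[word] = len(TOY_VOCAB) + 0x19000
        ids.append(TOY_VOCAB[word])
    return ids

for phrase in ("black person", "Nazi"):
    print(phrase, "->", [hex(t) for t in tokenize(phrase)])
# Example output (IDs are arbitrary):
#   black person -> ['0x19000', '0x19001']
#   Nazi -> ['0x19002']
```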
So what? You can study that in a university course on computer security. Hacking is not illegal. Neither is teaching hacking. Damaging somebody else's computer, stealing their information, etc. is. The problem doesn't arise with an AI giving you instructions. The problem would arise if an AI could execute an order like "Hack into the computer with IP address X and download all the information on the machine to my account." That would be illegal.
1