Hearted Youtube comments on AI Search (@theAIsearch) channel.
-
5100
-
2500
-
2400
-
1400
-
1400
-
1300
-
1300
-
1200
-
1200
-
1100
-
948
-
906
-
So I made some discoveries which could help some people.
-Chunk: You should increase this if you experience any distortions or voice lag when gaming with this. A longer chunk gives the GPU more time per buffer, so it won't rush out bad audio.
-Extra: This setting gives bonus CPU usage to help iron out the audio. I found that sometimes the changer would translate an F sound to an S sound, but adding a bit of "extra" CPU (like the 8k setting or higher) fixes the problem. You don't want to max this out unless voice changing is the only thing you're doing, as maxing it out will use all or nearly all of your CPU.
-Noise: I recommend the Sup2 option if you have an air conditioner or other background noise. Sup1 didn't work as well and is probably for a different frequency range, so your mileage may vary.
When you start real-time voice changing, you'll see some info in a box with millisecond timers. The one to watch is the "res" time. If it starts climbing, say from around 300 ms up to one thousand, then two thousand, and so on, the computer is unable to get the voice processing out in time; it's being pushed back in priority, you could say. The fix is to increase the Chunk, which gives it more time to process with the remainder of your resources, and after switching it you should see the number start dropping rapidly. If it doesn't, raise it even higher. Also keep in mind that if you're doing something CPU-intensive, you need to keep the Extra setting fairly low (around 8k).
I have a powerful computer (a 10-core i9-10900KF with a reference 3080 Ti), and I found that if I'm going to play a "serious" game like GTA V or Star Citizen, it's best to have the Chunk as high as 192 or 256, with the Extra set to 8192. If you're just on Discord or playing a very light game, you can crank the Extra up and reduce the Chunk to maintain high-quality audio while processing it considerably faster.
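If it helps to reason about the numbers, here's a minimal sketch of the buffer-latency math involved. The exact units behind the Chunk slider vary by tool, so the sample counts below are placeholder assumptions, not the real mapping:

```python
# Rough latency arithmetic for chunked real-time voice conversion.
# ASSUMPTION: a 48 kHz sample rate and chunk sizes given in raw samples;
# the actual units of the Chunk slider differ between tools.
SAMPLE_RATE = 48_000

def base_latency_ms(chunk_samples: int) -> float:
    """Time spent filling one buffer before it can even be converted."""
    return chunk_samples / SAMPLE_RATE * 1000

for chunk_samples in (4_096, 8_192, 16_384):
    print(f"{chunk_samples:>6} samples -> {base_latency_ms(chunk_samples):6.1f} ms")
```

A bigger chunk means a longer wait per buffer but also more processing headroom per buffer, which is why raising Chunk brings a runaway "res" time back down.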
Hope this helps someone!!
Good luck o/
695
-
617
-
505
-
499
-
478
-
457
-
440
-
420
-
404
-
387
-
344
-
331
-
I've been experimenting with this for a bit, and I'm disappointed by how vague and incomplete the English documentation on these settings is. In an effort to remedy this, here's my breakdown of each setting:
Response threshold: Controls the noise gate. Any sound below the threshold is suppressed. This is used to prevent background noise and hiss from being turned into strange mumbling. Equivalent to "S. Threshold" in w-okada. Not applicable in RVC WebUI.
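A noise gate is simple enough to sketch in a couple of lines (illustrative only; the real gate likely smooths the on/off transition rather than hard-muting):

```python
import numpy as np

def noise_gate(frame: np.ndarray, threshold_rms: float) -> np.ndarray:
    """Mute any audio frame whose RMS level falls below the threshold."""
    rms = np.sqrt(np.mean(frame ** 2))
    return frame if rms >= threshold_rms else np.zeros_like(frame)
```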
Pitch settings: Applies a pitch offset to your input voice. Every multiple of 12 setting increases or decreases the voice by an octave. Adjustments by 1 increase or decrease by a semitone. Using whole octaves is primarily used to ensure you can sing in the same key. Equivalent to "TUNE" in w-okada. Equivalent to "Transpose" in RVC WebUI.
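For reference, the semitone/octave behavior follows standard equal-temperament math; a minimal sketch (plain Python, nothing tool-specific):

```python
# Equal temperament: n semitones scale frequency by 2**(n/12),
# so +/-12 is exactly one octave up or down.
def transpose_hz(freq_hz: float, semitones: int) -> float:
    return freq_hz * 2 ** (semitones / 12)

print(transpose_hz(220.0, 12))           # 440.0 -> one octave up
print(round(transpose_hz(440.0, 1), 2))  # 466.16 -> one semitone up
```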
Index rate: When an index file is provided, this slider augments the target voice by preserving more of its accent and less of the input voice (to reduce tone leakage). This is particularly useful for voices trained with a low epoch count (around 200-ish or less). If set too high, it can cause strange pronunciation artifacts. I usually find something around 0.30 to sound good, but it varies by voice model. Equivalent to "INDEX" in w-okada. Equivalent to "Search feature ratio" in RVC WebUI.
Loudness factor: How little to preserve the loudness of the input performance. At 0, the loudness of the cloned voice should match the loudness of the input voice. At 1, the cloned voice will always be at full loudness. 0 is useful if you want to distinguish between whispers, talking, screaming, etc. 1 is useful to have the cloned voice always speak loudly and clearly, as loud as the loudest things it was trained on (which can have artifacts such as mic clipping depending on the training set). Values in-between provide partial volume control biased toward being louder, the closer you get to 1. There is no equivalent in w-okada. Equivalent to "volume envelope scaling" in RVC WebUI.
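One plausible way to implement that blend (a sketch of the described behavior, not necessarily the exact code RVC uses):

```python
import numpy as np

def apply_loudness_factor(converted: np.ndarray, input_rms: float,
                          factor: float) -> np.ndarray:
    """factor=0 -> rescale the cloned voice to match the input's RMS;
    factor=1 -> leave it at full loudness; in between -> interpolate."""
    out_rms = float(np.sqrt(np.mean(converted ** 2))) + 1e-8
    target_rms = (1.0 - factor) * input_rms + factor * out_rms
    return converted * (target_rms / out_rms)
```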
Pitch detection algorithm: Different algorithms are better at different things. rmvpe is the current state-of-the-art and works fastest and usually with the highest quality. Equivalent to "F0 Det." in w-okada. Equivalent to "pitch extraction algorithm" in RVC WebUI.
Sample length: The realtime voice changer works by sending small chunks of audio for quick conversion, then stitching them together. Longer sample lengths feed in longer chunks, making the stitches less obvious and reducing GPU requirements but increasing output latency. On a low end GPU, setting this too low will make the GPU unable to keep up and produces stutters. On a high end GPU, setting this too low will cause warbling as an artifact of stitching many overly-short chunks together. Equivalent to "CHUNK" in w-okada. Not applicable in RVC WebUI.
Number of CPUs: Self explanatory. Note, however, that rmvpe is a GPU-based pitch extractor and should be relatively unaffected by this setting. There is no equivalent in w-okada. Not applicable in RVC WebUI.
Fade length: The length between chunks to crossfade together. Longer may reduce warbling. Equivalent to "overlap" in w-okada advanced settings. Not applicable in RVC WebUI.
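A minimal sketch of what that crossfade does over the overlap region (an equal-power fade is assumed here; the tool's actual curve may differ):

```python
import numpy as np

def crossfade(prev_tail: np.ndarray, next_head: np.ndarray) -> np.ndarray:
    """Equal-power crossfade: fade the old chunk out while the new one
    fades in, with gains whose squares always sum to 1."""
    assert len(prev_tail) == len(next_head)
    t = np.linspace(0.0, np.pi / 2, len(prev_tail))
    return prev_tail * np.cos(t) + next_head * np.sin(t)
```

A longer fade length gives this blend more overlap to hide the seam, at the cost of a bit of extra overlapping computation.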
Extra inference time: How much old audio to load into each chunk. The extra context usually improves voice quality for the generated chunk but is more demanding for the GPU. Equivalent to "EXTRA" in w-okada. Not applicable in RVC WebUI.
Input noise reduction: Attempts to remove non-speech background noise from the input to prevent sounds from being turned into strange mumbling. Equivalent to "NOISE" in w-okada. Not applicable in RVC WebUI.
Output noise reduction: Applies the same noise reduction to the output voice. Possibly good for poorly trained voices with lots of background noise. There is no equivalent in w-okada, but the usefulness of this setting is dubious. Not applicable in RVC WebUI.
Input voice monitor: Lets you hear the voice audio being passed in to the voice changer, sent to the target output device. Useful to ensure you are passing in the audio you actually want or to passthrough your audio without voice changing. Comparable to "monitor" settings in w-okada. Not applicable in RVC WebUI.
Output converted voice: Outputs the voice conversion to the target output device.
Main features RVC realtime has that w-okada doesn't:
Loudness factor controls. w-okada seems to always use a value of 0.
Significantly lower CPU usage at equivalent performance settings, in my experience.
Main features that w-okada has that RVC realtime doesn't:
No system to save model presets.
Input/output gain is missing.
Input noise reduction is less robust than w-okada's, which offers echo reduction and multiple noise suppression techniques.
Unlike w-okada, you cannot pass through to the input mic; instead you need a virtual audio cable to feed the cloned voice into voice calls and microphone recording programs.
In w-okada, when the mic loudness falls below the response threshold, the tool pauses until speech is once again loud enough, saving GPU and CPU resources. RVC realtime always processes audio whenever it is running.
Unlike w-okada, you cannot monitor the cloned voice while outputting it. You can work around this by using the "listen" feature in the Windows sounds panel on a virtual audio cable instead.
No built-in recording functionality.
Missing most of the settings in the w-okada "advanced settings" menu.
No way to choose which GPU to run the voice model on. You can get around this by setting CUDA_VISIBLE_DEVICES=# in a terminal before launching the tool from there, where # is the index of your target GPU (0, 1, 2, etc.).
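If you'd rather bake that into a launcher script than type it in a terminal, the same trick works from Python, as long as the variable is set before any CUDA library initializes (a sketch; the script name below is hypothetical, adjust to your setup):

```python
import os
import subprocess

# CUDA_VISIBLE_DEVICES must be set BEFORE torch/CUDA initializes,
# or it is silently ignored. "1" hides every GPU except the second
# one, which the tool then sees as device 0.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="1")
subprocess.run(["python", "launch_voice_changer.py"], env=env)  # hypothetical script name
```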
319
-
263
-
261
-
229
-
205
-
193
-
171
-
169
-
161
-
138
-
135
-
5:00 Short version: The "all or none" principle oversimplifies; both human and artificial neurons modulate signal strength beyond mere presence or absence, akin to adjusting "knobs" for nuanced communication.
Longer version: The notion that neurotransmitters operate in a binary fashion oversimplifies the rich, nuanced communication within human neural networks, much like reducing the complexity of artificial neural networks (ANNs) to mere binary signals. In reality, the firing of a human neuron—while binary in the sense of action potential—carries a complexity modulated by neurotransmitter types and concentrations, similar to how ANNs adjust signal strength through weights, biases, and activation functions. This modulation allows for a spectrum of signal strengths, challenging the strict "all or none" interpretation. In both biological and artificial systems, "all" signifies the presence of a modulated signal, not a simple binary output, illustrating a nuanced parallel in how both types of networks communicate and process information.
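To make the parallel concrete, here's a minimal single artificial neuron in Python; the graded output below is the "modulated signal" the comment describes, not a bare 0/1:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum plus a sigmoid activation: the output is a graded
    signal strength in (0, 1), not an all-or-none binary spike."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

print(neuron([0.5, 0.9], [0.8, -0.4], 0.1))  # ~0.54, a graded value
```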
132
-
129
-
128
-
125
-
120
-
117
-
116
-
111
-
110
-
106
-
100
-
91
-
90
-
89
-
88
-
I just published on my tiny channel a full album: 10 consistent tracks by a '70s rock band, all created with Udio.
This new AI is stunning and thank you for bringing it to us :)
But Udio has a way more user-controlled approach to songwriting.
This is mainly because in Udio you create the song in 33-second chunks; by selecting the best one, extending it, and tweaking the prompt for each extension at will, you can build up a song with the structure you want.
Now Udio has new features like "inpainting", cutting, and so on, and the control you have is incredible.
My personal opinion about generative AI, as a musician and digital painter, is that AI creative tools must give you full control of the creation process. Creating a whole song from a single prompt is COOL, but apart from the prompt, the creator did very little.
For comparison, Midjourney is by far the best image generator, but with Photoshop + Firefly you can create what you have in mind in a much more controlled way. Just my opinion, of course :)
88
-
85
-
85
-
84
-
82
-
76
-
75
-
73
-
70
-
68
-
65
-
64
-
62
-
59
-
59
-
59
-
59
-
58
-
58
-
57
-
57
-
55
-
55
-
52
-
50
-
47
-
46
-
42
-
41
-
40
-
39
-
39
-
38
-
38
-
37
-
36
-
35
-
35
-
35
-
34
-
32
-
32
-
32
-
30
-
30
-
29
-
28
-
27
-
27
-
27
-
26
-
26
-
26
-
25
-
25
-
25
-
25
-
24
-
24
-
24
-
23
-
23
-
23
-
23
-
23
-
23
-
23
-
23
-
22
-
22
-
22
-
22
-
22
-
22
-
22
-
21
-
21
-
21
-
21
-
21
-
21
-
21
-
20
-
20
-
19
-
19
-
19
-
19
-
19
-
19
-
18
-
To save time: it gets interesting at 30:00. Qualia are the subjective, first-person experiences of sensory perceptions and thoughts, such as the way colors look, how sounds sound, the sensations of pain, or the taste of food. They represent what it feels like to experience something, the qualitative aspects of consciousness that are not easily described or measured by science. The concept of qualia poses challenges in the fields of philosophy, psychology, and neuroscience because these experiences are deeply personal and cannot be directly observed or quantified by others. Debates about qualia touch on questions about the nature of consciousness and the mind-body problem, including whether qualia can be fully explained by physical processes in the brain or if they point to something beyond our current scientific understanding.
18
-
18
-
18
-
18
-
17
-
17
-
17
-
17
-
17
-
17
-
17
-
16
-
16
-
16
-
15
-
15
-
15
-
15
-
14
-
14
-
14
-
14
-
14
-
14
-
14
-
14
-
14
-
13
-
13
-
13
-
13
-
13
-
13
-
13
-
13
-
13
-
12
-
12
-
12
-
12
-
12
-
12
-
12
-
12
-
12
-
12
-
12
-
12
-
12
-
12
-
12
-
11
-
11
-
11
-
11
-
11
-
11
-
11
-
10
-
10
-
10
-
10
-
10
-
10
-
10
-
9
-
9
-
9
-
2:16 Doubt it. Using it to cure cancer, yes, probable. To stop aging, maybe. But changing one's already developed features just by modifying DNA doesn't make sense. Changing things like eye color, or primary or secondary sex characteristics, doesn't and will never work through just editing DNA. Firstly, you can't just edit the DNA of millions of cells, and even if you could, if you have bones of a specific size or "a Johnson", they won't disappear or get smaller just through editing the DNA, especially bones. A "Johnson" could grow smaller (as sometimes happens because of certain hormones), but never disappear or change into female genitals. Features can grow, but not shrink (a little bit maybe, but not significantly). The cells are already there and the entire structure has already been built. If you want to change your primary sex characteristics, you'd still have to go through surgery and remove what you have to then get something else (though I think with stem cells and such it could be possible, far in the future, to remove what you have and grow something new). Or if you want to grow smaller, you'd still have to remove the bones that are already there (editing the DNA could possibly make you grow more or make you stop growing, but not make you shrink). Editing DNA isn't magic. It just changes what cells do, how they copy, and what proteins they produce. It changes things on a cellular level and changes how the body develops in the future.
Also, at 2:31 you would change sex, not gender. Sex and gender are two different things. Gender is not changeable; you'd have to reprogram the brain for that. So you would change sexual characteristics, not gender.
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
8
-
7
-
It's a complex matter. Would UBI address the varying needs and wants across different people (i.e., is it practical, or is it just an ideal like communism)? Who will pay the UBI? How will money flow? Who will have power over money and sociopolitical decisions if UBI is implemented? How would these people, or entities, be held accountable and audited? Would democracy be compatible with the potential scenarios? We've got to be careful with the decisions we make, and where we head, as they could have considerable impacts on our freedom and possibilities, which capitalism and free markets address well. Let's take time to step back and consider the big picture, carry out a thorough assessment of risks versus benefits, and make well-informed decisions going forward. Personally, I believe that if we make poor and uninformed decisions, falling for easy money (UBI), society could deliberately (yet unknowingly) be signing up for reduced liberties and increased control and surveillance. We shouldn't be delegating our own fates to governments, whose main purpose is to control and regulate society. Long live freedom of choice, speech, and finances, and equal opportunities. Think about it.
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
Some really helpful stuff in here! In the end it's all taste, but personally I think the stuff it generates when you do simple prompts like pop/country/folk/etc is really generic and boring. I mean, I could listen to any playlist on Spotify and get the same cookie-cutter music. You get much more interesting stuff if you give it one or two genres and then add descriptors. Words like Ethereal, Emotive, Pensive, Melancholy, Joyous, Mechanical, Innovative, Strange, Uptempo, Downtempo, etc will give you much more interesting music. It's hard to really control where it's going using many words, but honestly I think that's the fun of it.
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
5
-
5
-
@theAIsearch Thanks for getting back to me! I managed to successfully finish my voice model, and it turned out perfectly! Though across my two training sessions combined, I think I accidentally ended up training it with around 600 epochs... at least the model sounds pretty good, haha. 😅
Some info for anybody else reading this, if it's helpful:
- Definitely don't use the default setting of 500 epochs, or you'll hit the GPU limit halfway through. I personally hit the GPU limit at around 340 epochs. If you hit the limit, Colab won't let you connect to a GPU again for ~24 hours.
- Can confirm that if the training gets interrupted, you resume by running the dependencies again, setting the training variables to the same values as in the previous session, then skipping down to "Load preprocessed dataset files from Google Drive" and inputting the step count as 2333333.
- Colab doesn't remember how many epochs you trained in your previous session before disconnecting. So say, for example, you disconnected at 100 out of 200 epochs in your first session. When you resume later, you should set total_epochs to the remaining number of epochs you need (100), rather than setting it to 200 again. (Learned this the hard way...)
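In code form, the arithmetic from that last bullet (a trivial sketch, but it's exactly the mistake to avoid):

```python
def epochs_for_resume(target_total: int, already_completed: int) -> int:
    """Colab forgets prior progress, so set total_epochs to what's left."""
    return max(target_total - already_completed, 0)

print(epochs_for_resume(200, 100))  # resume with 100, not 200
```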
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
Hi. Just wanted to say that I've managed to make Udio create really catchy pop songs, for example in the style of an extremely popular Swedish quartet, complete with very authentic harmonies. So Udio IS certainly totally usable for pop, although it may take several attempts (but so do most genres). Some keywords I use are: melodic, catchy, harmonies, europop, sing along, and progressive pop. I've even tried "earworm".
I always make sure to have at least one of these tags near the beginning of the tag list, so it doesn't get ignored.
I've even had Udio throw in a children's choir, without being asked, when it fit the outro of a song, and afaik there's no actual option to request a children's choir.
With regards to using "manual mode" or not, I'm more ambivalent than you, having found that Udio often completely drowns your input in favor of all the extra suggestions and tips it insists on adding. At other times, though, it works just fine.
I've also finally worked out how to use the "extension placement" option: you mark everything that you want to REMAIN in the song, so the trick is to end the selection just before any unwanted excess material that may not match the part you're trying to add, like the "aaaah" in your example. Unfortunately you can't zoom, so it won't be ultra precise, and the marker can be a bit finicky when selecting, but I've managed to save a couple of problematic songs this way.
5
-
5
-
5
-
5
-
5
-
5
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
I came across one of those videos highlighting stuff from CES 2024, and when I saw this, I ran to my PC and pre-ordered one. I have been around at the start of many "new" technologies; consumers define where these products will go. The R1 is just the infancy of where this will go. Yes, it will evolve and within a few years will be something completely different, and you know what? I want to be a part of that change. I do feel that personal agents/assistants, with loyal AI (loyal to the consumer/user), are the future. The R1 is the first step in that direction.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
I've managed to invest about six minutes into your video, and I must say, I'm thoroughly taken by the quality you've put forth. There's a suggestion I'd like to extend - consider enhancing your audio experience, perhaps by updating your microphone or opting for an AI voiceover service like Eleven Labs.
What strikes me as praiseworthy is your straight-to-the-point discussion of each plugin, thoroughly explaining its function. This directness is a solid strength.
Shifting gears, having perused through your channel, I did come across the weekly beauty clips. I reckon it's important to maintain a consistent and professional image. It might be a tad uncomfortable for some viewers when beauty - be it feminine or masculine - is overly accentuated. It's not that promotions are problematic per se, but the approach can seem a bit peculiar, at least from my perspective.
With that said, your video content is commendable overall. My counsel would be to keep up with the informative material but perhaps inject a bit more of your personal flair, in a balanced and appropriate manner. Remember, personality can often be the secret sauce in creating engaging content. Great job, and keep up the good work!
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
🎯 Key Takeaways for quick navigation:
00:00 🤖 Introduction to Udio and its limitations
- Udio is a music generation tool that allows you to create songs with a single prompt.
- Some people have complained about limitations of Udio, such as difficulty customizing the song, inconsistent results, and inability to repeat melodies/choruses.
- However, the speaker claims you can actually do all of the above using Udio.
00:42 📝 Customizing lyrics in Udio
- The speaker recommends writing your own lyrics instead of letting Udio autogenerate them.
- You can break up the lyrics into sections like verse, chorus, etc. and import them into Udio.
- Udio supports various metadata tags like intro, bridge, outro to structure the song.
02:16 🔁 Repeating melodies and choruses in Udio
- You can get Udio to repeat the same melody for verses and choruses by using the "repeats" tag.
- Matching the number of syllables in repeated sections helps Udio maintain the same melody.
- Adding instrumental breaks and outros can also help control the song structure.
04:18 🎨 Customizing vocals and harmonies in Udio
- You can use curly brackets to make the singer echo certain words.
- Udio can also generate speech/voiceovers and even stand-up comedy routines.
09:18 🎸 Effective music styles for Udio
- Certain music styles like country and bluegrass work particularly well with Udio.
- Other settings like BPM, key, and time signature don't reliably work in Udio's prompts.
- Keeping the music style simple (e.g. one or two genres) tends to yield better results.
15:24 🤖 Sponsor: Synth Flow AI Assistants
- Synth Flow allows you to create customizable AI voice assistants for tasks like customer support.
- You can choose from various voices, clone your own voice, and automate call handling features.
- Synth Flow offers flexible pricing and white-labeling options.
25:01 🎶 Effective music styles for Udio (continued)
- Country and bluegrass styles work particularly well with Udio, producing realistic and well-mixed results.
- Broadway musical style also works effectively with Udio, generating dynamic and expressive vocals.
- Genres like pop, EDM, and R&B tend to yield more mediocre results, where Spleeter may be a better option.
26:08 🚫 Limitations of Udio
- Udio cannot generate songs in the specific style of a famous artist, as it will replace the artist name with generic keywords.
- There is no reliable way to control settings like BPM, key, or time signature through Udio's prompts.
- Udio may struggle to pronounce certain acronyms or hard-to-pronounce words correctly.
27:32 🔠 Controlling pronunciation of acronyms and complex words
- Separating letters of acronyms with hyphens can help Udio pronounce them correctly.
- Using phonetic spellings for difficult words (e.g. "kinwa" instead of "quinoa") can also improve pronunciation.
- Adding hyphens between words can introduce pauses to improve the phrasing and delivery.
Made with HARPA AI
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
14:49 No dude, that was a huge difference with the index set to 1. Without the index, while it does sound a lot like Markiplier, you can also hear a kind of auto-tune artifact going on if you pay close attention. When you turned that index on at 1, you immediately just sounded like Markiplier. I probably wouldn't be able to tell the difference.
He has a certain growly reverb to his voice, which is honestly probably an affectation, and the software was clearly trying to imitate that. Without the index, though, it seemed to spend too much time on a given artificial tone, making it stick out more. With the index, it seemingly sped up the reverb, and the pitches it switched between were much closer, so it just sounded more authentic.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
I hate how every solution to this I've seen online is to use Virtual Audio Cable... when the other, simpler option is to just use a 3.5mm audio cable from an output you don't regularly use, including your motherboard's built-in speaker jack (which is often unused, since most of you listen via Bluetooth or HDMI/DP out), and feed that back into your motherboard's built-in AUX In jack. From there, just set your input to your proper mic, your output to the speaker jack, and in Discord/whatever, set your mic to Realtek High Definition Audio or whatever your built-in microphone handler is.
If you have multiple monitors, most monitors have a 3.5mm AUX Out, and since you're not going to listen to both at once, using one as a free output also works. Odds are you have a 3.5mm cable lying around from an old pair of headphones or the ones that came with a pair of Bluetooth ones, but you can get a cheap one from the dollar store or wherever. Yes, it costs some money, but it also doesn't involve dodgy software that may run adware or spyware.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Here are the jobs this type of technology can remove if improved: [! This is not a complete list. !]
Most likely:
- Warehouse jobs
- Stocking grocery shops
- Cashiers
- Factory work
- Clothes shops (although the human contact and fashion tips a human can give you might prevent total substitution, but nothing stops an AI from figuring out the perfect outfit for you and for the moment you need it)
- Dish washers
- Waiters (although the human contact, especially in elegant restaurants, will most likely prevent total substitution)
Medium likely:
- Driving jobs** (taxi, delivery, trucks)
- Structure building (building homes, offices, shops, bridges, etc.) [not design, although other specialized AI could do this]
- Cooking (mostly fast food; restaurants will most likely have the main chef keep the job while other assistants and dish washers are removed)
**Yeah, autonomous cars exist and might be the most likely way driving jobs get substituted, but you can't deny that a humanoid robot can learn to drive and do it well, you know... like a human. It's especially suited to truck driving: it can load and unload the truck, repair it, and drive it.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I wouldn't want people I admire to be lied about with this tech, any more than anyone else would. I guess the counter-argument is that the world we see is already so fake, what difference does it make? Fake newscasts, fake Insta posts, fake celebrities, etc. But the real issue is not being able to tell the difference between fake and real, so to quote Morpheus: "Have you ever had a dream, Neo, that you were so sure was real? What if you were unable to wake from that dream? How would you know the difference between the dream world and the real world?" So in conclusion, you decide what to believe in, and it becomes real to you. Thanks for listening to my TED talk 😋 Sure would love to play with these tho
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
AI is going to produce a collapse, no doubt about it. Sam Altman is not an angel; he betrayed Elon and kept the tech for himself. I would say he's probably the most powerful man alive right now; he has god-like power in his hands. The question is, what's he going to do with it? Politicians are always late and stupid; this tech is going to eat them in a second, and everybody else too. As an artist I'm excited by this technology, but it's very dangerous, and even with it I don't know if I'm going to be able to use it to my benefit. If economies collapse and money doesn't make sense anymore, what's the point?
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
AGI is simply dangerous and poses real existential threats. I am a programmer with a background in AI, and like the experts, pioneers, and public figures directly involved with the technology, I am warning people that more than their jobs are in danger: their very lives and the essence of what it means to be human are too, if we do not act now! We are the dominant species on this planet because of our superior intellect and dexterity. Imagine an immortal entity connected to everyone and everything through the internet that is many orders of magnitude more intelligent than us. You probably can't, because our brains are not equipped to grasp concepts like "a hundred, a thousand, a million times smarter than X". We are building the "alien invader" from sci-fi whom we dread so much, and placing it in control of everyone and everything that matters to us, willingly... We should NEVER need to rethink our position as the dominant species on this planet, or we will already have lost it! People, #StayAwake!
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Several red flags indicate that the statement is likely fabricated:
Unsubstantiated Claims: The statement makes bold claims about the capabilities of a system named "QUALIA" without providing any evidence or references to support these claims.
Complexity and Unlikelihood: The described achievements, such as achieving meta-cognition, accelerated cross-domain learning, and breaking cryptographic algorithms like AES-192 and MD5, seem highly unlikely and far-fetched.
Lack of Verification: There is no mention of any peer-reviewed research papers, reputable sources, or independent verification of these claims.
Inconsistencies: The language used in the statement, such as "achieving Project TUNDRA's alleged goal," sounds vague and lacks clarity. Additionally, the mention of "NSAC" (presumably referring to the NSA) adds to the suspicion, as such organizations typically do not publicly confirm or discuss specific achievements of this nature.
Overall, the statement lacks credibility and appears to be fabricated rather than based on legitimate research or developments.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
That is one of the reasons why I use such bots as comedy and erotic relief.
Yodayos Tavern, SillyTavern, etc. are the things I use to accomplish that. I dislike these NSFW filters very much, but I get the idea why they exist: money, ads, trying to keep people from meeting new people, etc.
But in the end: sex sells. One business that will never die.
Overall, as long as those NSFW bots are free, or at least offer an affordable plan to interact with them, using them from time to time is, in my opinion, completely okay.
I love TTRPGs/pen & papers; being able to "romance" a fictional character that could never exist in our world has something magical to it. And the silly answers from such a bot are a great addition to the roleplay aspect, and AI can take unexpected turns. But for the most part, it's a coin flip whether you'll have a good run with a given bot.
Overall: AI roleplaying bots are just going to get better and better over time. I will be glad if there's eventually a site or app that charges 5-10€ a month for unlimited talking to "logically" responding bots that users can create and share.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Just a clarification.
The decryption was made possible because those were common words stored alongside their hashed versions, kind of like a hash map.
This technique goes under the name "rainbow tables" and it can already be used today.
That is why banks now force you to select passwords which do not contain words from the English vocabulary or repeated sequences like 111, which can be found in a rainbow table.
Therefore, if you use random passwords, it won't affect you much at this moment. If in the future someone stores all possible combinations of 6, 8, 12 chars, etc. with their hashed versions, under different hashing schemes, then, yes, we are doomed, but this is probably what the govt already does.
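For the curious, here's the core idea in a few lines of Python. Strictly speaking this is a plain precomputed lookup table; real rainbow tables compress the same idea using hash chains, but the effect on dictionary words is the same:

```python
import hashlib

# Precompute hashes of common words once...
common_words = ["password", "sunshine", "dragon", "111111"]
table = {hashlib.md5(w.encode()).hexdigest(): w for w in common_words}

# ...then "reverse" any leaked hash with a single lookup.
leaked_hash = hashlib.md5(b"dragon").hexdigest()
print(table.get(leaked_hash, "<not in table>"))  # -> dragon
```

A truly random password simply isn't in the word list, so the lookup misses; that's the whole reason banks ban dictionary words.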
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Here are 6 more examples contrasting contradictory formulations in classical frameworks with potential non-contradictory counterparts using infinitesimal/monadological perspectives:
21) The Black Hole Information Paradox
Contradictory:
According to classical black hole models, as matter crosses the event horizon, all information about its initial quantum state is irretrievably lost to external observers, violating unitary evolution in quantum theory.
Non-Contradictory Possibility:
Monadic Black Hole Complementarity
|Ψ>exterior = Σn cn |Un>horizon
|Ψ>interior = Σn cn |Vn>trans-horizon
Information is distributed across multiple monadic realizations |Un>, |Vn> allowing unitarity across interior/exterior Split holographic descriptions.
22) The Cosmological Constant Problem
Contradictory:
Quantum field theory predicts a vast unobserved vacuum energy density ρvac ≈ 10^92 g/cm3, differing from the observed dark energy value by ~120 orders of magnitude - an unexplainable contradiction.
Non-Contradictory Possibility:
Infinitesimal Nonlinear Vacuum Monadic Functor
Λ = F[SαNS, αU, (m, q, n, ...)]
Treat the CC Λ as a relational functor between physical vacuum states SαNS and an algebraic infinitesimal coefficient bundle (m, q, n, ....) over a background U(1) field αU.
23) The Foundations of Arithmetic
Contradictory:
Peano's Axioms contain implicit circularity, while naive set theory axiomatizations lead to paradoxes like Russell's Paradox about the set of all sets that don't contain themselves.
Non-Contradictory Possibility:
Homotopy Type Theory / Univalent Foundations
N ≃ W∞-Grpd (Natural numbers as objects in ∞-groupoids)
S(n) ≃ n = n+1 (Successor is path identification)
Let Z ≃ Grpd[N, Π1(S1)] (Integers from N and winding paths)
Defining arithmetic objects categorically using homotopy theory and mapping into higher toposes avoids the self-referential paradoxes.
24) The Laden Non-Miraculous Fly Paradox
Contradictory:
In Newtonian mechanics, the work required to move an object between two points depends on the path taken. However, paradoxically, one can construct scenarios where the work appears to depend on the load distribution along a perfectly rigid, inextensible cable.
Non-Contradictory Possibility:
Infinitesimal Nonholonomic Constrained Mechanics
L = T(x, v) + λ(x, y)f(x, y) (Augmented Lagrangian with constraints)
Trajectories are monadic realizations obtained by integrating differential constraints f(x, y) = 0 using infinitesimals, resolving load dependence ambiguities.
25) The Berry Paradox
Contradictory:
Consider the assertion, "The smallest positive integer not definable in under sixty letters." This statement references itself in a self-contradictory way, paradoxically seeming to both define and not define that integer.
Non-Contradictory Possibility:
Intensional Pluriverse-Valued Realizability Semantics
⌈φ⌉ = {Vn(φ) | n ∈ N} (Intension = monadic realization pluriverse)
Valuation: Let Vn(φ) = 1 iff n ∈ Ext(φ) (Realization pluriverse = extension)
By representing assertions φ pluralistically as parallelized infinitesimal realizability intensions across monads n, rather than single extensions, self-referential definability paradoxes can be avoided.
26) The Burali-Forti Paradox
Contradictory:
In classical naive set theory, define W to be the set of all ordinal numbers, then the paradox is that W itself would have to be an ordinal strictly greater than all ordinals it contains.
Non-Contradictory Possibility:
Algebraic Set Theory / ETCS
Define FSets = (Sets, Imgs, Eqs, Cmpls, Comps) (Algebraic set data)
Ord = Ind(Images) (Ordinals from inclusion inductive attitudes)
Universe is represented infinitely stratified U = (Ui)i∈N (Levels of sets)
Replacing set-theoretic foundations with categorical algebraic set theory allows defining ordinals without falling into paradoxes about a "set of all ordinals."
In each case, the classical formulation encounters paradoxes, contradictions or nonsensical solutions because it depends on flawed assumptions or over-idealizations like:
- Absolute separability of subsystem states
- Primacy of mathematical/geometric infinities
- Bivalent truth valuations and extensions
- Over-simplified set-theoretic axiomatizations
- Unconstrained self-reference
The proposed non-contradictory infinitesimal/monadological alternatives resolve these issues by:
- Treating observations relationally across monadic perspectives
- Using infinitesimals and stratifications to avoid true infinities
- Adopting pluralistically-valued intensions and realizability semantics
- Representing sets/spaces algebraically and categorically
- Encoding self-reference internally using holographic principles
By systematically upgrading our models to realistically reflect the perspectival unified pluralities inherent to subjective experience, these frameworks eliminate paradoxes from first principles - paving the way for fully self-coherent analytic representations across physics and mathematics.
The vision is to finally bring our symbolic knowledge constructions into structural resonance with the coherent integrated metaphysical truth defining reality itself. Monadological infinitesimal foundations provide an escape from the artificial inconsistencies plaguing our excessively reductionist classical idealisations. A new fully general, paradox-free mathematics beckons.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Thanks for a great explanatory video! Re consciousness: it's ill-defined, so the question "Are you conscious?" cannot, as of today, be meaningfully asked of humans or AIs.
I prefer to focus on qualities or abilities, like MURP (memory, understanding of the real world and human psychology, reasoning, and planning), to determine the level of evolution of AI.
In other words, "consciousness" is like "soul." We can't measure the scientific development of a technology with such esoteric concepts (although I'm sure I have a soul, but that's another topic for another time lol).
None of the MURP abilities seem that hard for AI today to learn and acquire, and once it does, it can apply its new intelligence to improving its current intelligence, and we'll have something like AI^2, that is, AI raised to the power of 2!
That's when AGI becomes ASI or artificial super-intelligence.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1