Comments by @diadetediotedio6918 on the "Theo - t3.gg" channel.

  50.  @cemreomerayna463  1. I became angry because of your attitude towards my response, not because you offended me directly, nor because of a "stupid technology". I'm just generally tired of AI apologists and of people, like the other guy in this thread, who talk nonsense without any real reflection. Either way, sorry if my tone was not ideal for a discussion; I'm fine and this is all in the past now. 2. What I literally said in my comment is that this is a fundamentally, computationally intractable problem. Do you understand the implications of that? The implications are that [it is not getting more reliable], or better, that the [reliability gains are marginal]. Reliability implies a grounded, conscious commitment to the truthfulness of a sentence: you call someone reliable when that person has a good amount of proven knowledge, has the right intentions (to seek truth), and has peers confirming that veracity. Those conditions are generally reasonable to expect when we define reliability. Now, AI fails two of these. It does not have true knowledge in any sense; it literally just spits tokens. That is [literally] how it works: you can make an AI say almost anything with the right prompt, which is far from possible with humans and is obviously a terrible sign for this prospect. It does not "understand"; it spits the most probable token. It can be steered towards more "reliable" responses by reinforcement learning and other techniques (like dataset filtering, or "grounding" with RAG and the like), but it is still fundamentally just spitting tokens in a specific order. There is no knowledge there, so it fails condition (1).
For (2), AI obviously does not have consciousness, and it does not have any known morality either; it can only imitate and spit text. It is easy to see why, by implication, it also cannot "commit to the truth" or "tell the truth" in any sense imaginable; such systems are just intrinsically unreliable, and that's the point. For coding, the implications are exactly the same. A programming language is not a "regular grammar"; I don't know where you got that impression. Most if not all mainstream programming languages are literally context-free grammars with specific context-sensitive aspects. Even though they are structured (because they need to be parsed efficiently), and obviously far less complex than natural language, they are nowhere near as simple as a regular grammar. Coding is also extremely complex in and of itself, and even the best, most advanced "reasoning" models make extremely silly mistakes that you would expect from a complete amateur: literally inventing fictitious packages out of thin air, writing dysfunctional code that does dangerous things like deleting what it is not supposed to delete, and showing a basic-to-terrible understanding of coding patterns and expected solutions. I've used every model from GPT-2 (obviously unable to produce almost anything but extremely short one-liners that were terribly wrong almost all the time), to GPT-3 (terrible at coding, but starting to improve), to 3.5 (way better, still terrible), 4 (mid at best, still very bad), 4o (almost the same as 4, a bit more precise), o1 ("reasons", but still makes the same basic mistakes I saw in 4o) and o3-mini-x (not that much better than o1).
Those models are not more "reliable"; they are better at making less obvious mistakes (which is arguably more dangerous, not less, since now you need to understand the semantics of the thing to catch them). They make fewer crude mistakes, but they still make copious amounts of silly, problematic errors. Their "reliability" improves only marginally with each new innovation, and that is my point. 3. This is not only false, but a dangerous way of thinking in itself. See (2) for why human reliability is inherently less problematic, and truer, and more: humans take responsibility for their actions; they are moral agents in the world, while AI agents are, again, just spitting words. If a human makes a terrible, fatal mistake, he can be fired or even sent to jail, and he will have nightmares about his mistakes. A bot making mistakes is like a sociopath: it cannot be held accountable, cannot feel anything, and its unpredictability is [absolutely dangerous], while humans have developed ways to deal with their uncertainty that actually work (we literally delivered man to the Moon with software so small it would be incomparable to a compiled hello world in many modern languages). Your response is insufficient, and problematic.
    3
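[Editor's note] The comment's claim that a model just "spits the most probable token" can be sketched concretely. This is a minimal, hypothetical illustration of greedy decoding: a softmax over made-up logit scores, then picking the argmax. Real models sample from this distribution (with temperature, top-p, etc.) over tens of thousands of tokens, but the mechanism is the same shape.

```python
import math

def softmax(logits):
    # Shift by the max logit for numerical stability, then normalize.
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def greedy_next_token(logits):
    # "Spitting the most probable token": take the argmax of the distribution.
    probs = softmax(logits)
    return max(probs, key=probs.get)

# Hypothetical logit scores for the next token after "The capital of France is".
logits = {"Paris": 6.2, "London": 3.1, "banana": -1.0}
print(greedy_next_token(logits))  # -> Paris
```

Note that nothing in this procedure consults a fact store: "Paris" wins only because its score is highest, which is the comment's point about token emission versus knowledge.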
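[Editor's note] On the grammar-class point above: a regular grammar compiles to a finite-state machine with a fixed number of states, so it cannot track arbitrarily deep nesting; recognizing balanced parentheses (the simplest context-free language) needs unbounded memory, here a single counter. A sketch:

```python
def balanced(s: str) -> bool:
    # Track nesting depth with a counter, i.e. unbounded memory.
    # A finite-state machine (what a regular grammar corresponds to)
    # has finitely many states, so it cannot distinguish all depths.
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False  # a closer appeared before its opener
    return depth == 0

print(balanced("(()())"))  # -> True
print(balanced("(()"))     # -> False
```

Mainstream language syntax (nested blocks, expressions, brackets) sits at least at this context-free level, which is why parsers use stacks rather than plain regular expressions.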
  142. [[Parents can't effectively protect their kids from all the "bad" stuff from the internet. That's just a fact of life.]] They can protect them for the MOST part, and the rest is unfortunate but also a fact of life; no government will solve this problem entirely either. Parents' duty is to TEACH their children values. They should not need to control their kids' entire lives; they should teach them how to avoid this themselves and why it's BAD. If you can't do that, it's a failure on your side, or at least something out of your control; you cannot solve all the world's problems. [[The government can (and will) provide an ID with a secure chip that can be scanned to extract only the information needed. Even better, that info can be a zero-knowledge proof that only says "you are >18 years old".]] The government can (and will) control every single aspect of your ENTIRE life if it wants to; it will adopt mass surveillance in the name of peace and will kill all opposition if it needs to, "in the name of democracy". The government WILL NOT do anything to gather LESS information; it will do EVERYTHING to make sure it is the one getting all the information while companies get less, so you can bet they will make "zero-knowledge proofs, pero no mucho". [[The platform needs to comply with the law. No way around that.]] Sure, just like companies needed to comply with the law under the 1940s regime, but that does not make it a good thing. [[The developers just have to use the zero-knowledge proof. No 3rd party identity provider is needed.]] "Just have to" is a very funny way to put it; if you knew the absurd difficulty of providing a secure ZKP about personal identity without allowing pseudonyms, you would know this will just not happen. [[No one wants that, not even the governments.]] You can bet your ass that they sure as hell want this, for themselves.
[[I can 100% assure you that when the EU implements age verification next year, they will use a privacy-first system, probably a double-anonymity system with ZKP.]] I can also assure you that when the next dictatorship forms in the EU, it will be worse hell for its citizens than all the previous ones combined, because no one will know until it's too late.
    1
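[Editor's note] The "only says you are >18" idea being argued over is minimal disclosure: a trusted issuer attests a single boolean claim instead of handing over the birthdate. The sketch below is NOT a zero-knowledge proof; it fakes the issuer's attestation with an HMAC and a shared demo key (both hypothetical), purely to show the interface shape. A real system would use public-key credentials, and a real ZKP would additionally make presentations unlinkable.

```python
import hashlib
import hmac

# Hypothetical demo key. A real issuer would sign with a private key and
# verifiers would hold only the public key; sharing a MAC key like this
# would let any verifier forge claims.
ISSUER_KEY = b"demo-issuer-secret"

def issue_age_claim(subject: str, over_18: bool) -> bytes:
    # The issuer attests ONLY the boolean claim, never the birthdate.
    msg = f"{subject}|over18={over_18}".encode()
    return hmac.new(ISSUER_KEY, msg, hashlib.sha256).digest()

def verify_age_claim(subject: str, over_18: bool, tag: bytes) -> bool:
    # Recompute the tag for the claimed statement and compare in constant time.
    msg = f"{subject}|over18={over_18}".encode()
    expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

tag = issue_age_claim("alice", True)
print(verify_age_claim("alice", True, tag))   # -> True
print(verify_age_claim("alice", False, tag))  # -> False
```

Even this toy version shows where the commenter's objection bites: the subject identifier links presentations together, and removing that linkability without enabling pseudonym abuse is exactly the hard part.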