Comments by "" (@diadetediotedio6918) on "Theo - t3․gg" channel.

  41.  @cemreomerayna463

      1. I became angry because of your attitude towards my response, not because you offended me directly, nor because of a "stupid technology". I'm just generally tired of AI apologists, and of people like the other guy in this thread who talk nonsense without any real reflection. Either way, sorry if my tone was not ideal for a discussion; I'm fine, and this is all in the past now.

      2. What I literally said in my comment is that this is a fundamentally, computationally intractable problem. Do you understand the implications of that? The implication is that [it is not getting more reliable], or rather, that the [reliability gains are marginal]. Reliability implies a grounded, conscious commitment to the truth of a sentence: you call a person reliable when (1) they have a good amount of proven knowledge, (2) they have the right intentions (to seek truth), and (3) they have peers confirming that veracity. Those conditions are generally reasonable to expect when we define reliability, and AI fails two of them. It has no true knowledge in any sense; it literally just emits tokens, that is [literally] how it works. You can make an AI say almost anything with the right prompt, which is far from possible with humans and is obviously a terrible sign. It does not "understand"; it emits the most probable next token (see the short sketch below). It can be steered towards more "reliable" responses by reinforcement learning and other techniques (dataset filtering, or "grounding" with RAG and the like), but it is still fundamentally just emitting tokens in a specific order. There is no knowledge there, so it fails condition (1). As for condition (2), AI obviously has no conscience and no known morality; it can only imitate and emit, so it is easy to see why it also cannot "commit to the truth" or "tell the truth" in any sense imaginable. These systems are intrinsically unreliable, and that is the point.
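      A minimal sketch of what "emitting the most probable token" means. The lookup table standing in for the model is invented for illustration; a real LLM scores a vocabulary of tens of thousands of tokens with a neural network, but the loop has the same shape: pick a likely continuation, append it, repeat. Nothing in it checks whether the output is true.

        # Toy greedy next-token generation (illustration only).
        TOY_MODEL = {
            (): {"The": 0.5, "A": 0.3, "An": 0.2},
            ("The",): {"cat": 0.6, "dog": 0.3, "idea": 0.1},
            ("The", "cat"): {"sat": 0.7, "ran": 0.2, "spoke": 0.1},
        }

        def next_token(context):
            # Pick the single most probable continuation of the context.
            probs = TOY_MODEL.get(tuple(context), {"<eos>": 1.0})
            return max(probs, key=probs.get)

        def generate(max_tokens=10):
            tokens = []
            for _ in range(max_tokens):
                tok = next_token(tokens)
                if tok == "<eos>":
                    break
                tokens.append(tok)
            return " ".join(tokens)

        # The likeliest string wins, true or not.
        print(generate())  # -> "The cat sat"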
      For coding the implications are exactly the same. A programming language is not a "regular grammar"; I don't know where you got that impression. Most if not all mainstream programming languages are context-free grammars with specific context-sensitive aspects. They are structured (because they need to be parsed efficiently), and they are obviously far less complex than natural language, but they are nowhere near as simple as a regular grammar (a minimal illustration follows at the end of this comment). Coding is also extremely complex in and of itself, and even the best, most advanced "reasoning" models make extremely silly mistakes you would expect from a complete amateur: inventing fictitious packages out of thin air (names that a one-line check against the package index would catch; see the second sketch at the end of this comment), writing dysfunctional code that does dangerous things like deleting what it is not supposed to delete, and showing a basic-to-terrible grasp of coding patterns and expected solutions. I have used every model from GPT-2 (unable to write almost anything beyond extremely short one-liners, which were wrong almost all the time) to GPT-3 (terrible at coding, but starting to improve), 3.5 (way better, still terrible), 4 (mid at best, still very bad), 4o (almost the same as 4, a bit more precise), o1 ("reasons", but still makes the same basic mistakes I saw in 4o), and o3-mini (not much better than o1).

      Those models are not more "reliable"; they are better at making less obvious mistakes, which is arguably more dangerous, not less, because now you need to understand the semantics of the output to catch them. They make fewer crude mistakes, yet they still produce copious amounts of silly, problematic errors. Their "reliability" improves only marginally with each new innovation, which is exactly my point.

      3. This is not only false, but a dangerous way of thinking in itself. See (2) for why the reliability of humans is inherently less problematic; beyond that, humans take responsibility for their actions and are moral agents in the world, while AI agents are, again, just emitting words. If a human makes a terrible, fatal mistake, he can be fired or even sent to jail, and he will have nightmares about it; a bot making mistakes is like a sociopath: it cannot be held accountable and cannot feel anything. Its unpredictability is [absolutely dangerous], while humans have developed ways to deal with their uncertainty that actually work: we literally delivered men to the Moon with software so small it would be incomparable to a compiled hello world in many modern languages. Your response is insufficient, and problematic.
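      On the grammar-class point, a small illustration under the standard textbook framing: regular grammars correspond to finite automata, which cannot count unbounded nesting, while the bracket structure found in real programming languages needs at least a stack (context-free). The recognizer below is a toy, not a parser for any real language; a single counter plays the role of the stack.

        # Balanced parentheses need unbounded counting, which a finite
        # automaton (the machine behind a regular grammar) cannot do.
        def balanced(src: str) -> bool:
            depth = 0
            for ch in src:
                if ch == "(":
                    depth += 1
                elif ch == ")":
                    depth -= 1
                    if depth < 0:      # closed a paren that was never opened
                        return False
            return depth == 0          # everything opened was closed

        print(balanced("(()(()))"))    # True, at any nesting depth
        print(balanced("(()"))         # False; no plain regular expression
                                       # decides this for all depths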
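      And on fictitious packages, a sketch of how cheaply a hallucinated name can be checked, here against PyPI's public JSON endpoint, which answers 404 for projects that do not exist. The names in the demo are examples only (the second is assumed not to exist), and existence alone says nothing about whether a package is safe to install.

        from urllib.error import HTTPError
        from urllib.request import urlopen

        def exists_on_pypi(package: str) -> bool:
            # 200 means the project exists on PyPI; 404 means it does not.
            try:
                with urlopen(f"https://pypi.org/pypi/{package}/json"):
                    return True
            except HTTPError as err:
                if err.code == 404:
                    return False
                raise  # any other error: do not guess

        for name in ["requests", "totally-made-up-package-123456"]:
            print(name, "->", exists_on_pypi(name))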