Comments by "WloCkuz" (@wlockuz4467) on "ThePrimeTime" channel.

  4. I am no scientist, but I find it hard to make sense of what their solution is. AI is bottlenecked by compute, and as with everything else in computing, optimising compute is a trade-off: use less compute and get a worse model, or spend more compute and get a better one. I believe their latest experimental model cost something like $2k per task, which is just nuts. Even with a magic optimisation that cuts compute in half without sacrificing quality, you're still paying as much as $1k per task. They'd have to bring it down to pennies or a few bucks to actually be profitable; otherwise it's literally burning cash. Assume OpenAI's current greatest publicly available model is capable of 5% of collective human intelligence (that's being very generous, because in reality it's probably closer to 0.1%, since it's built from what's on the internet), and it's already heavily bottlenecked on compute. So if you magically scale up to 100% of human intelligence, or even 50%, aka the buzzword Artificial Superintelligence or ASI, you would probably need more compute power to run it than anyone has today. You could organise an orgy between AWS, Azure and GCP and still fall short. So unless we invent some new magic energy source or build a Dyson sphere to get practically endless energy, I don't see how we can get better AI models than today's without it becoming an economic and environmental crisis. Don't get me wrong, I am not an AI doomer. I love using it to speed up some of my workflows, but I am not buying into this superintelligence nonsense.
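The cost argument in the comment above can be sketched with a quick back-of-the-envelope calculation. The $2k-per-task figure and the "pennies" target are the commenter's numbers; the $0.05 target price is an assumption picked to make "pennies" concrete. The question it answers: how many successive compute-halving optimisations would it take to close that gap?

```python
import math

# Commenter's figures: roughly $2,000 per task today,
# and a target of "pennies" per task (assumed here to be $0.05).
cost_today = 2000.0
target = 0.05

# Each hypothetical "magic optimisation" halves the compute cost.
# Count how many halvings are needed to reach the target price.
halvings = math.ceil(math.log2(cost_today / target))
print(halvings)  # 16
```

Sixteen successive halvings is a ~40,000x efficiency gain, which is the commenter's point: one clever optimisation (or even a handful) does not get you from $2k per task to a profitable price.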