Comments by "Guinness" (@GuinessOriginal) on "LegalEagle" channel.

  1. @TEverettReynolds It's quite simple: they didn't know how to use it properly. Think of it as a car. Not every car is the same; some cars have different properties and functionality than others, some are suited to different terrains and environments, and so on. You wouldn't try to plough a field with a Ferrari or enter a drag race with a monster truck. Unfortunately, that's exactly what they did. Every time you prompt an AI, you're deciding what car you want to drive and how you're going to drive it. The prompt you give it is crucial in determining the outcome. If you don't know what you're doing, you can easily end up in the wrong car driving the wrong way.

     But it's even more complex than that. Sometimes, before you get in your car, depending on what you want to do with it, you might change the setup: the suspension, the brakes, fine-tune the engine, convert it to electric, things like that. It's the same with generative AI models. They're designed to be creative and generate randomised text based on probabilities. There are a number of settings that control this, which you can adjust to make the model more or less random and creative. In this case, they should have adjusted those settings to make the model conservative and deterministic. They should also have designed their prompt to ensure it knew what it was talking about, used the right problem-solving technique and search algorithm, and verified its answers. In other words, they should have chosen the right car for the job, set it up correctly and looked at a map, rather than just jumping in the first car they came across and assuming they knew the way.

     The AI model is designed to create text as per the prompt you give it. If you tell it to tell you about cases where such-and-such happened, it will tell you about said cases, whether they exist or not. If you only want real cases, you have to tell it that. If you don't want it to be random and creative, you have to adjust the settings. If you want to ensure it's giving you correct answers, you need to tell it to re-evaluate and verify its answers. And finally, if it's important, you need to check it yourself. Just as Tesla cars insist you touch the wheel every 30 seconds, this AI also requires that the human in control has some idea of what they're doing.
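The comment's two concrete suggestions, turning down the sampling randomness and prompting the model not to invent cases, map onto real API parameters. Below is a minimal sketch using the OpenAI Python client; the model name, prompt wording, and parameter values are illustrative assumptions, not the setup anyone in the video actually used.

```python
# Minimal sketch of the commenter's advice, assuming the OpenAI Python
# client (pip install openai). Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",   # assumed model; any chat model would do here
    temperature=0,    # "conservative and deterministic": no creative sampling
    top_p=1,          # leave nucleus-sampling truncation effectively off
    messages=[
        {
            "role": "system",
            "content": (
                "You are a legal research assistant. Cite only real, "
                "verifiable cases with full citations. If you cannot "
                "verify that a case exists, say so rather than guessing."
            ),
        },
        {
            "role": "user",
            "content": "Find cases where an airline was sued over an in-flight injury.",
        },
    ],
)
print(response.choices[0].message.content)
```

Even at temperature 0 a model can still fabricate citations, so the comment's final point stands: if it matters, a human has to check the output.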