Hearted YouTube comments on the Asianometry (@Asianometry) channel.

  1. (53 likes) Hey, long-time fan of your channel. You touched on my field of interest, so I thought I'd leave some notes from an AI master's student:

     - 1:22: You mention semantic labeling but don't actually show any semantic segmentation, which is the semantic labeling of each pixel. It is slightly different from instance segmentation (on the right), because with semantic segmentation you assign a value to every pixel, not just to the objects in the image. The most complete form is panoptic segmentation, which annotates both class and instance at every pixel. (A toy illustration of the three label formats follows this comment.)
     - 1:47: A brand-new dataset is not actually the standard test. What is commonly tested is generalization from one subset of a dataset (~80%) to another subset of the same dataset (~20%). A brand-new dataset falls under out-of-distribution (OOD) evaluation and the general term "robustness". (A sketch of the usual split follows this comment.)
     - 3:22: AlphaZero is a warped example of the concept you're trying to explain, because AlphaZero has a perfectly working simulation to train in, and its predecessor also did a lot of self-play. The improvement was removing the need to pretrain (or prime) the model on human play. That was an advancement in reinforcement learning, not in the simulation-to-reality gap.
     - 4:59: You miss the full scope of the difference in scale. The difference between 3 and 96 doesn't seem large (a factor of 32), but what is actually significant is the number of parameters: roughly 10,000 for ALVINN versus 170,000,000,000 (170 billion) for GPT-3, a factor of 17 million. (The arithmetic is worked out after this comment.) Also check out DALL-E for the new multi-modal language and image comprehension, and Gato for multi-modal behavior learning.
     - 5:20: A KMP 1500 robot isn't deployed with any machine-learned behavior; it's all old-fashioned expert-made modelling. A better illustration would be robot arms grasping complex and/or deformable objects.
     - 6:11: Actually, the problem isn't that 5,000 images aren't enough; that's an academic benchmark dataset. Big companies can get 500,000 images if they try a bit (probably even more). The problem is that even that is not enough for self-driving cars. It's not that 500,000 images can't cover trees or pedestrians in the city on a clear day; it's that an insane amount of diversity is needed to detect trees and roads under any weather condition and circumstance, not to mention one-in-a-billion or one-in-a-trillion edge cases.

     For the rest, the CARLA stuff is great! PS: for another mind-blow, look up "Neural Radiance Fields" and "Instant NeRF".
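
A toy illustration of the three label formats from the 1:22 note, using NumPy. The 4x4 image, the class IDs, and the class*1000+instance panoptic encoding are all made-up examples, not anything from the video:

```python
import numpy as np

# Toy 4x4 "image" containing two car objects on a sky background.

# Semantic segmentation: every pixel gets a class ID (0 = sky, 1 = car).
semantic = np.array([
    [0, 0, 0, 0],
    [0, 1, 0, 1],
    [0, 1, 0, 1],
    [0, 0, 0, 0],
])

# Instance segmentation: only objects get labels, each with its own ID
# (0 = background/unlabeled, 1 = first car, 2 = second car).
instance = np.array([
    [0, 0, 0, 0],
    [0, 1, 0, 2],
    [0, 1, 0, 2],
    [0, 0, 0, 0],
])

# Panoptic segmentation: every pixel carries both class and instance,
# here encoded as class_id * 1000 + instance_id.
panoptic = semantic * 1000 + instance
print(panoptic)  # sky pixels -> 0, first car -> 1001, second car -> 1002
```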
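A minimal sketch of the ~80/20 in-distribution evaluation from the 1:47 note, using scikit-learn's train_test_split. The random data and the logistic-regression model are placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 16))        # 1,000 samples, 16 features (placeholder)
y = rng.integers(0, 2, 1000)      # binary labels (placeholder)

# Hold out 20% of the *same* dataset for testing; evaluating on a truly
# brand-new dataset would instead be an out-of-distribution (OOD) test.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression().fit(X_train, y_train)
print("in-distribution accuracy:", model.score(X_test, y_test))
```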
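The scale arithmetic from the 4:59 note, worked out in a few lines. The parameter counts are the rough figures quoted in the comment (GPT-3 is more commonly cited as 175 billion):

```python
alvinn_params = 10_000           # ~10 thousand (rough figure from the comment)
gpt3_params = 170_000_000_000    # ~170 billion (commonly quoted as 175B)

print(gpt3_params // alvinn_params)  # 17,000,000: a factor of ~17 million
print(96 // 3)                       # layer-count factor: only 32
```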