Comments by "clray123" (@clray123) on the "Fireship" channel.
@zainnatour4792 No, these models cannot be "easily" trained to do what good programmers do, which is essentially predicting the future and predicting human behavior: the consequences of your decisions with regard to correctness, performance, user experience, handling of exceptional situations, productivity of future maintainers, total cost of ownership caused by a particular implementation, popularity trends of programming languages/libraries/frameworks, etc. The best they can do is parrot code examples, but hey, for that purpose looking up stuff on StackOverflow is entirely sufficient, and chances are you will also get to see some intelligent discussion there, unlike the "commentary" which the AI generates along with the copied code snippet. These models struggle and fail at basic logic tasks like adding numbers correctly (unless they cheat and resort to using a calculator); they are far from the sort of causal and diagnostic reasoning that is required for successful software dev.