Comments by "Immudzen" (@Immudzen) on the Continuous Delivery video "AI Disruption & Its Impact On Software Development Jobs".
We have also been on the verge of self-driving cars for more than 10 years, and progress has nearly stopped. It was easy to get to about 95%, but getting beyond that has proven insanely hard. These LLMs are already hitting the same point. In the hands of a skilled coder they can assist, but they don't understand. Most of the code they generate is wrong to various degrees, and when an LLM is questioned about the code, it is pretty clear it has no understanding of what is being asked. Remember, LLMs are probability models that predict the next most likely word. They don't actually understand the question being asked. I think they will get better as assistants but won't be able to replace programmers. Some companies have already tried to do all the work with LLMs and no coders, and they failed. I think they will continue to fail.
13
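To make the "predict the next most likely word" point concrete, here is a toy sketch in Python. It is purely illustrative and not how production LLMs work (they use neural networks over subword tokens, not word counts), but the core objective is the same: model the probability of the next token given what came before, then emit a likely continuation.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then always emit the most frequent successor. There is no
# "understanding" anywhere in this pipeline, only co-occurrence counts.
corpus = (
    "the model predicts the next word "
    "the model predicts the most likely word"
).split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict(word):
    """Return the most probable next word seen after `word`, or None."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

# Generate a short continuation starting from "the".
word = "the"
for _ in range(5):
    print(word, end=" ")
    word = predict(word)
    if word is None:
        break
print()
```

Running this prints "the model predicts the model": fluent-looking output produced by nothing but frequency statistics, which is the commenter's point at miniature scale.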
@luke5100 I don't dispute that it is a timesaver, but at the same time it is also very stupid. It doesn't understand what it is doing, and no amount of prompt writing changes that. There are quite a few research papers on this already. You are correct that the more narrowly something is scoped, the more likely it is to succeed, but that also very clearly makes it an assistant rather than something that can be used without programmers. I would say that on scientific and engineering code, if I have it write a 5-10 LOC function, it will usually get about 80% of it correct, but you had better know how to write that function yourself or you will never be able to correct it.
6
These tools are useful, but you have to remember they are REALLY stupid. They have no understanding of what they are writing. They pass college-level standardized exams but fail at high school and grade school exams. They can do certain common programming tasks but routinely fail at anything less common, and the reason for both is the same: these things heavily rely on the solution to the question being in the training data. They can mix and match things together, but they don't understand. They solve all the problems on Stack Overflow BECAUSE they are on Stack Overflow. They can solve standardized exams because there are thousands of online study guides for them. If these models killed Stack Overflow, the companies would have to save that data to keep using it for training, because the models can't be trained without it. There have already been studies showing that if you train an LLM on the output of an LLM, it gets dumber fairly rapidly, because you feed too many mistakes back into the system.

Most coding is not very novel at the level of individual functions, so these tools can help, but they are not very good at coding larger pieces. I think we are going to rapidly hit a point where these models don't get much better than they are now. They will still be very useful and can be tuned for specific tasks, but I don't think they will actually get much better. We have to come up with some fundamentally different kind of model. I will also note that this is common in AI models. Look at self-driving: it has basically stalled for close to 10 years because the remaining problems are so difficult. If you build your own classification or regression models, you can see they quickly get close to right, but further improvements are incredibly difficult.
1
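The "quickly close to right, then stuck" pattern in that last comment is easy to reproduce. Below is a minimal sketch assuming scikit-learn is available; the synthetic dataset and logistic-regression model are illustrative choices, not anything from the comment. Accuracy climbs fast with a little data, then flattens well short of 100%, and the 5% injected label noise (flip_y) puts a hard ceiling on what any model can reach, mirroring the "easy 95%, brutal last 5%" effect.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification task with 5% label noise,
# so even a perfect model cannot exceed ~95% accuracy.
X, y = make_classification(n_samples=20000, n_features=20,
                           n_informative=10, flip_y=0.05,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Train on progressively larger slices and watch accuracy saturate.
for n in (50, 200, 1000, 5000, 10000):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    acc = model.score(X_test, y_test)
    print(f"{n:>6} training samples -> accuracy {acc:.3f}")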