Comments by "Mikko Rantalainen" (@MikkoRantalainen) on the "ThePrimeTime" channel.
If I remember correctly, a floating point operation was a single FPU instruction but it took A LOT of CPU clocks to complete. Quake (the first version) did one FPU division at the same time as the CPU was doing 16 multiplications and 16 additions. The 486 was way slower at floating point division, which is why it was really slow at rendering Quake graphics. Square root is an even slower FPU operation than division. One could argue that the original Quake was running on two cores on a single-core CPU (the CPU and FPU could compute things in parallel on the Pentium). The fast inverse square root algorithm replaces the division + square root with one bitshift, three multiplications and two additions (an addition is identical to a subtraction with the sign reversed), so it was about 5 times faster than a single FPU division, never mind the square root operation. (Sketch below.)
2
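For reference, a minimal Rust sketch of that trick, the famous fast inverse square root (the magic constant 0x5f3759df is the one popularized by the Quake III source release); the shift, the subtractions and the multiplications the comment above counts are all visible below:

```rust
// Approximates 1/sqrt(x): one shift, one integer subtraction, and a single
// Newton-Raphson refinement step built from multiplications and a subtraction.
fn fast_inv_sqrt(x: f32) -> f32 {
    let i = x.to_bits();                          // reinterpret the float's bits as an integer
    let i = 0x5f3759df_u32.wrapping_sub(i >> 1);  // magic constant minus half the bits
    let y = f32::from_bits(i);                    // reinterpret back: a rough first guess
    y * (1.5 - 0.5 * x * y * y)                   // one Newton-Raphson step refines the guess
}

fn main() {
    let x = 2.0_f32;
    println!("approx: {:.6}, exact: {:.6}", fast_inv_sqrt(x), 1.0 / x.sqrt());
}
```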
Asahi Lina wrote a thread on X on 2023-08-31 about Linux kernel APIs being underspecified and how that affected her GPU driver, which is written in Rust. Her driver could support runtime loading and unloading, but the kernel's C-level APIs leave way too many edge cases undefined, and when she contacted the authors of the C-level API she got "just copy what the amdgpu driver is doing" instead of a clear definition of the expected behavior. And it turns out that the amdgpu driver doesn't even do the thing she was asking about! Update: I see you reviewed part of this thread near 50:00.
2
0:44 I think the "must license AI input data or it's copyright infringement" argument is a weak one because you could just purchase a DVD from Ebay and call it a day. Copyright doesn't give the copyright holder the right to say what a given legally licensed copy can be used for, other than making additional copies (which is prevented by copyright law). Specifically, copyright law doesn't grant the copyright holder the right to dictate how the copy is used – and this is further demonstrated by all the EULAs that end users must additionally accept when extra restrictions are wanted. If the copyright holder could dictate such restrictions, we wouldn't have "I Agree" buttons anywhere.
2
14:20 The argument "I could look that up" doesn't make any sense to me. I could write a book about fly-fishing in Japanese, even though I know nothing about fly-fishing or Japanese. I would just need to look that up. Not having to look up stuff means I'm more productive in my work!
2
Self-driving cars are especially hard because you need a low-latency loop. I think you could already do an LLM-based implementation for a self-driving car if you accept that it goes about 1 mile per hour, to give it enough time to evaluate everything needed and come up with a suitable option. And even then it might need a supercomputer to run a single car.
2
@BittermanAndy I know that LLMs (with typical text-based training) have nothing to do with self-driving cars. However, the algorithm itself is generic enough to train from examples of human driving if you give it enough computing resources. Is it a good algorithm for such a solution? No. But it would be good enough. The human brain works by having a HUGE biological counterpart to LLM technology, and it seems to be enough to allow humans to seem intelligent sometimes.
2
I first tried to learn Haskell circa 2002 during my CS studies at university. About a month ago I decided to give programming in Haskell another try as a weekend project. I was able to write a simple web server but failed to accept binary uploads because I was using string types that weren't binary safe. Some part of my program was silently dropping null bytes from the uploaded files! After finding out that Haskell has a lot of different binary string types and no automatic type coercion between them, I lost interest in debugging my hobby project and moved on, because I could already see that the library functions I had to use were going to require different types for nearly every task. I still consider the effort/time equation of learning Haskell pretty bad. Rust seems a much better language to learn if you don't know either today. I think it's still a good idea to learn some Haskell, as with many other languages, because the more languages you know, the better programmer you'll be overall. And I still don't understand how people are supposed to debug Haskell programs. You cannot even easily log messages or data from any random location in the code, because writing to a log would be a side effect, and changing every function signature just to pass log-related stuff around for debugging seems like a huge pain in the butt.
2
@TurtleKwitty I agree that when you can dump 'a everywhere and get the correct behavior, the compiler should be smart enough to do that automatically. (Example below.)
2
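Rust's lifetime elision rules already do exactly this for the common cases; a minimal sketch of the two spellings:

```rust
// Explicit lifetimes: every `'a` here is mechanical boilerplate.
fn first_word_explicit<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

// Lifetime elision: the compiler infers the exact same signature automatically.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    assert_eq!(first_word_explicit("hello world"), "hello");
    assert_eq!(first_word("hello world"), "hello");
}
```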
@thekwoka4707 I was thinking more along the lines of "declare that variable X can be shared between threads without Arc<Mutex>", which would lift the requirement everywhere for that one variable, not just inside a single unsafe block. (See the sketch below for the ceremony I mean.)
2
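For context, a minimal sketch of the Arc<Mutex<...>> ceremony Rust currently requires for sharing mutable state between threads (the per-variable opt-out proposed above is hypothetical, not an existing Rust feature):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared ownership (Arc) + exclusive access (Mutex) are both mandatory
    // today; the type system enforces them at every use site.
    let counter = Arc::new(Mutex::new(0u64));
    let mut handles = Vec::new();
    for _ in 0..4 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..1000 {
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4000);
}
```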
Reducing microservice latencies by running the microservices on the same machine as the actual service that needs them is a total no-go. The whole point of microservices was supposed to be scalability, and if the only way to get the latency down to an acceptable level is to run everything on the same piece of hardware, you obviously cannot scale at all. All you get is extra overhead within the same hardware!
2
A great teacher monitors the learner's performance and adjusts the teaching methods to match the learner's needs. Bad teachers think that they are performing a pre-scripted play and could be replaced with a YouTube video.
2
I think knowing a language vs being a master with it are two totally different things. If knowing a language were enough, any of us could write best-selling books instead of writing software if we wanted.
2
@adssib04 Which exact video do you mean? Internet of Bugs has multiple videos about Devin.
2
4:30 I think the more interesting question would have been "of the projects that had clear requirements documented before development started, how many actually matched the final end-user needs?" – I would guess many waterfall projects did implement things as designed, but I don't believe the majority of those projects actually matched the user needs.
2
Microservices would be a great solution to many problems if communication weren't limited by the speed of light and physical interconnects. When you physically separate all the computation, you force processing to happen in serialized form, and the overall latency for a given request gets worse.
2
I mostly agree. Another option is to give every microservice a hard deadline that results in a good enough user experience. I would say that's about 10 ms, because you often need to combine multiple microservices for a single user-visible response, and the user experience starts to degrade when the total response time goes above 100 ms. When you write a microservice with a 10 ms deadline, you practically have to write it in a systems programming language such as C, C++ or Rust. Do you have a team for that? Does that team agree that splitting everything into small microservices is the best way forward? If you use Java, Go or another managed language, every time garbage collection runs its stop-the-world phase, all in-flight requests for that microservice instantly miss their deadlines. So obviously you cannot use any of those languages without an insane amount of hacks to avoid GC at all times. (Deadline sketch below.)
2
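A minimal sketch of what such a hard per-request deadline could look like (the 10 ms budget comes from the comment above; the handler itself is illustrative, not from any real service):

```rust
use std::time::{Duration, Instant};

// Hypothetical per-request deadline guard: handle the request, but treat
// anything over the 10 ms budget as a failure rather than a slow success.
const DEADLINE: Duration = Duration::from_millis(10);

fn handle_request(payload: &str) -> Result<String, &'static str> {
    let start = Instant::now();
    let response = payload.to_uppercase(); // stand-in for the real work
    if start.elapsed() > DEADLINE {
        return Err("deadline exceeded");
    }
    Ok(response)
}

fn main() {
    match handle_request("hello") {
        Ok(r) => println!("ok: {r}"),
        Err(e) => eprintln!("failed: {e}"),
    }
}
```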
I agree 100% that React is a framework. And you cannot sensibly run multiple frameworks at once.
2
You can also do general-purpose deterministic fault checking with the open source LFI toolkit, but I'm pretty sure that approach would be much slower for generating lots of test cases than running the code in a simulator as explained in this talk. Another option is libfiu. (A sketch of the general idea is below.)
2
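To illustrate deterministic fault injection in general (a hand-rolled sketch, not the actual API of LFI or libfiu): the test decides up front which call fails, so every run is reproducible:

```rust
trait Storage {
    fn write(&mut self, data: &[u8]) -> Result<(), String>;
}

struct FaultyStorage {
    calls: u32,
    fail_on_call: u32, // injected fault: the Nth write returns an error
}

impl Storage for FaultyStorage {
    fn write(&mut self, _data: &[u8]) -> Result<(), String> {
        self.calls += 1;
        if self.calls == self.fail_on_call {
            Err("injected I/O failure".to_string())
        } else {
            Ok(())
        }
    }
}

fn save_twice(s: &mut dyn Storage) -> Result<(), String> {
    s.write(b"first")?;
    s.write(b"second")
}

fn main() {
    let mut s = FaultyStorage { calls: 0, fail_on_call: 2 };
    // Deterministically exercises the error path of the second write.
    assert!(save_twice(&mut s).is_err());
}
```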
I've been writing software for a living for 20+ years, and I feel the hardest part is getting decision makers to clearly define what's "good enough" for release. It seems to change randomly at least weekly.
2
Deterministic simulation with a single random seed was a pretty cool idea. (Sketch below.)
2
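The core of the idea in a minimal, dependency-free sketch (the LCG constants are Knuth's MMIX ones): derive all randomness from one seed, so any failing run can be replayed exactly:

```rust
// A tiny linear congruential generator standing in for the simulation's RNG.
struct Rng(u64);

impl Rng {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }
}

fn simulate(seed: u64) -> Vec<u64> {
    let mut rng = Rng(seed);
    // Stand-in for simulated events: all "random" choices flow from the seed.
    (0..5).map(|_| rng.next() % 100).collect()
}

fn main() {
    // Same seed, same event sequence: a failure found once is reproducible forever.
    assert_eq!(simulate(42), simulate(42));
    println!("{:?}", simulate(42));
}
```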
@sophiophile Btrfs has lots of great features, but its runtime performance compared to ext4 and XFS has been too big a sacrifice, which makes it a hard choice. Yes, you cannot have all those features with zero overhead, but the overhead has historically been too great.
2
@RobBCactive I've understood that Rust developers want changes to be known decisions. They are okay with interfaces changing, but those changes must be intentional, not accidental. Currently the C developers seem to think they can avoid saying what the intent actually is, and whatever is implemented becomes the spec instead. That is, any buggy behavior is part of the spec, and if the buggy behavior is fixed, that's the new spec.
2
31:00 As far as I know, VS Code sends the currently open files as context for Copilot. I consider it a nice way to guide Copilot, because I can open related files and Copilot will only suggest related lines of code.
2
I can somehow understand that their regex test for the arguments failed to catch the failure, but how on Earth do their tests not include "reboot the system after the update", considering their kernel driver is marked as critical for boot?? If they had even one automated system that installs the update and reboots, the whole failure would have been caught before release. Clearly there was no real testing of any kind. And the scary thing is that they believe they are doing "multiple levels of testing".
2
Unpopular opinion: you don't need unit tests, you only need integration tests that cover the code so well that not even mutation testing can find any untested code. When you only test externally visible behavior, you can always rewrite the whole unit/library/module, and when the new implementation passes those integration tests, it will be at least as good as the old implementation. Unit testing can pinpoint the error in the implementation faster, but unit tests bitrot faster too.
2
37:00 Handling network failures is really hard, but it's not optional if you want any real-world users. Especially mobile users trying to use your app in a moving vehicle are going to see all kinds of errors, disconnects and timeouts. Platforms other than iOS are less unstable, but none can ever be perfect, because TCP can only hide so much even when the OS is co-operating with your app. Some mobile networks might even change the client IP address every now and then! (Retry sketch below.)
2
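The bare minimum a mobile client needs for this is retrying transient failures with backoff; a sketch (send_request is a hypothetical stand-in for a real network call):

```rust
use std::thread::sleep;
use std::time::Duration;

// Simulated network call: the first two attempts fail transiently.
fn send_request(attempt: u32) -> Result<String, String> {
    if attempt < 2 {
        Err("connection reset".to_string())
    } else {
        Ok("200 OK".to_string())
    }
}

// Retry transient failures with exponential backoff instead of surfacing
// every network blip to the user.
fn send_with_retries(max_attempts: u32) -> Result<String, String> {
    let mut delay = Duration::from_millis(100);
    let mut last_err = String::new();
    for attempt in 0..max_attempts {
        match send_request(attempt) {
            Ok(resp) => return Ok(resp),
            Err(e) => {
                last_err = e;
                sleep(delay);
                delay *= 2; // back off before the next attempt
            }
        }
    }
    Err(last_err)
}

fn main() {
    println!("{:?}", send_with_retries(5));
}
```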
@duke54762 As I wrote, when requirements change after the fact, you always have to modify existing code one way or another. Astronaut-architecture design means you try to guess which extensions might be needed in the future, so you design the printer interface generically enough to support HTML later even though it's not in the current requirements. And it might turn out the future requirement is not HTML but PDF, which requires a totally different API. My point was that you shouldn't try to guess future requirements. Obviously you should keep the dependencies between modules as simple as possible, but don't make a newly designed interface between modules more complex than what you actually need right now. Otherwise you'll just end up with a more complex API that won't match the unexpected future requirement change anyway.
1
If you had so many transactions that a single CPU thread on the current database master machine couldn't handle them, what then?
1
Even with current-tech AI, increasing computational resources will always improve network performance, so claiming that AI has peaked is like claiming that oil has peaked and no further progress can be made. The big question is: how close to the peak of economically viable AI are we with the current state-of-the-art algorithms? If we took existing algorithms and just added enough memory and processing power, it's possible that they could be good enough to result in AGI. However, I'm 100% sure it wouldn't be economically viable with currently available hardware without improvements in the algorithms. We either need a breakthrough similar to what attention heads were for LLMs, or new hardware that can run modern networks with a LOT fewer resources.
1
I think that if you don't like reviewing code, you either (1) don't know the language well enough to understand what other people are writing, or (2) don't care about the codebase you're maintaining. Code reviews are the only way to make sure your whole codebase doesn't end up total crap, with every part having a different coding style (and I'm not talking about syntax here) and highly probable mistakes ending up committed forever. Why don't we let commercial aircraft fly with only one pilot because of the risk of human mistakes, yet at the same time think it's okay to permanently commit the work of a single software developer without verification by another developer? And if somebody tells me that testing or QA will fix the issue, I'll just say "ha-ha".
1
TL;DR: ssh was supposed to be single-threaded but effectively ran as multi-threaded, thanks to the SIGALRM handler being implemented incorrectly (a single-threaded program's signal handler should not change program state beyond setting a volatile flag). Had all of ssh been written as multi-threaded code, the SIGALRM handler would have worked as expected, because it would have had to use proper locking to access shared memory structures. Of course, that would only be true if somebody were able to write correct multi-threaded code in C – that is, without any security vulnerabilities. Even the Linux kernel fails at this every now and then. Human programmers are not careful enough to write security-sensitive code in C except by random happy accident. Update: 41:05 Yes, in other words it's a re-entrancy bug. Shouldn't happen in single-threaded code in theory, but incorrectly written signal handlers can break those assumptions. (Safe-handler sketch below.)
1
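A minimal sketch of the safe pattern (assumes the libc crate, libc = "0.2", in Cargo.toml): the handler only sets an atomic flag, and all real work happens in the main loop where re-entrancy is not a concern:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static ALARM_FIRED: AtomicBool = AtomicBool::new(false);

extern "C" fn on_alarm(_sig: libc::c_int) {
    // Async-signal-safe: one atomic store, nothing else. No allocation,
    // no stdio, no touching non-reentrant program state.
    ALARM_FIRED.store(true, Ordering::SeqCst);
}

fn main() {
    unsafe {
        libc::signal(libc::SIGALRM, on_alarm as libc::sighandler_t);
        libc::alarm(1); // deliver SIGALRM after one second
    }
    loop {
        // The real work happens here, outside the signal handler.
        if ALARM_FIRED.swap(false, Ordering::SeqCst) {
            println!("alarm handled safely");
            break;
        }
        std::thread::sleep(std::time::Duration::from_millis(10));
    }
}
```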
I agree that if anything, DeepSeek R1 release will increase demand for GPUs. However, the release might increase demand for RAM even more.
1
Are there any post-quantum encryption methods that do not require a really heavy handshake? Something like X25519 requires transmitting 32 bytes, but every post-quantum scheme I know about requires a lot of data, which doesn't scale well for any TLS-like protocol.
1
@Omnifarious0 I totally agree that a handshake is required. The question is whether you can create a quantum-safe protocol that runs on regular computers and needs less than 1 KB for the handshake, instead of the multiple megabytes that quantum-safe algorithms seem to typically need. The whole point of the handshake is to come up with a random 256-bit (32-byte) shared secret on both ends, because AES-256 will be safe even against quantum computers.
1
24:30 So their "content validator" didn't validate the content?? Even at a level as rudimentary as "the content is supposed to contain 21 items, check the length". (That check is literally a few lines; see below.)
1
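The kind of rudimentary sanity check being described, sketched out (the 21-item count is the one mentioned in the comment; everything else is illustrative):

```rust
// Validate that parsed content has the expected shape before shipping it.
fn validate_content(fields: &[&str]) -> Result<(), String> {
    const EXPECTED_FIELDS: usize = 21;
    if fields.len() != EXPECTED_FIELDS {
        return Err(format!(
            "expected {EXPECTED_FIELDS} fields, got {}",
            fields.len()
        ));
    }
    Ok(())
}

fn main() {
    let bad_update = vec!["a"; 20]; // one field short
    assert!(validate_content(&bad_update).is_err());
}
```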
@lupenn5914 How do you use a SYN cookie without already spending the bandwidth for the incoming packet plus sending the SYN-ACK? A SYN cookie only reduces the server's RAM usage.
1
@lupenn5914 A SYN cookie can only reduce the server's RAM usage. A DDoS will still eat all of your bandwidth in both directions, and the cookie doesn't help with CPU usage at all.
1
@Fs3i The attacker can use a botnet, but the defender must pay for 24/7 network connectivity. It's always an unfair game, and you have to be Cloudflare, Google, Facebook or Amazon to be able to handle it without any issues.
1
The whole article seems to mistake returning error values for "returning the error as an 8-bit int". If your only error value is an 8-bit int, sure, that's a pretty bad abstraction. (Example of a richer error value below.)
1
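What "returning an error value" can look like beyond an 8-bit int — a sketch of a structured error type that carries context, not just a code (the names are illustrative):

```rust
#[derive(Debug)]
enum ConfigError {
    NotFound { path: String },
    Parse { line: usize, message: String },
}

fn load_config(path: &str) -> Result<String, ConfigError> {
    if path.is_empty() {
        return Err(ConfigError::NotFound { path: path.to_string() });
    }
    // Stand-in for real parsing that hits a syntax error:
    Err(ConfigError::Parse { line: 3, message: "unexpected token".to_string() })
}

fn main() {
    match load_config("app.toml") {
        Ok(cfg) => println!("loaded: {cfg}"),
        Err(e) => eprintln!("config error: {e:?}"), // full context, not a bare code
    }
}
```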
You test the tests with mutation testing. If mutation testing cannot make a logical change to the project source code without triggering at least one test failure, your tests are good, because then all of the code is actually verified rather than merely executed at least once. Obviously, you cannot do any mutation testing before your tests have 100% code coverage, which pretty rarely happens in any project not obsessed with automated testing. (Example below.)
1
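What a mutation tester does, shown by hand on a toy example (names are illustrative): flip one operator and see whether any test fails; if none does, the boundary was never really tested:

```rust
fn is_adult(age: u32) -> bool {
    age >= 18
    // A typical mutant would change `>=` to `>`. A test suite without a
    // case for exactly 18 would still pass, exposing the coverage gap.
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn boundary_is_covered() {
        assert!(is_adult(18)); // kills the `>` mutant
        assert!(!is_adult(17));
    }
}
```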
22:22 This seems way too legit!
1
If you use CDN, you should be using Subresource Integrity (SRI). Read the MDN docs if you don't know how it works.
1
12:50 I totally agree that we don't have good enough tests. Show me a project whose tests are so good that mutation testing cannot find problems in them. I'll be waiting for a long time. (Tests should be thought of as validation for the actual implementation, and mutation testing as validation for the tests. Every time mutation testing can make a change to the code without at least one test failing, those tests are not good enough! Are you going to be writing new features or new tests this year?)
1
I think the double quotes around the prompt text might have caused the quality to go down significantly.
1
10:15 I think highlighting that doesn't start and end on word boundaries is bad for viewers with OCD. If you start the highlight with a double click, your browser will automatically extend the selection boundaries to full words.
1
I agree that the article makes the major mistake of treating functional programming as the end goal. Functional programming has some pros and some cons. Good software engineering requires weighing both for every decision made in a project, especially when choosing the implementation language. For practical software engineering, I've yet to see highly successful functional programming projects. Do you think we have any highly successful operating systems, web browsers, office suites, web servers, NAS servers or AI systems written in a functional programming language? I think the most successful project is Emacs, and that's a much simpler system than a full operating system, web browser or GPU driver. I'd love to be proven wrong, though.
1
@cdoublejj I started programming on Pentium CPUs and that was still slow enough to warrant learning Assembly.
1
@artyshan5944 Do you actually think that 1 MB of extra dependencies is easier to understand fully? Or are you simply ignoring all the dependencies and simply hoping everything works in the end? I'm asking for real because I've yet to see a complex library without bugs.
1
I'm aware of the Copilot pause, too. I'm trying to game myself the other way around: I try to write code fast enough to never see a Copilot suggestion, but if I stall for long enough, I let Copilot suggest something to get out of the stall. My computer has a wired FTTH internet connection, so the pause needed isn't that long, but I still try to write new code fast enough to never see Copilot suggestions.
1
16:00 People often fail to understand how much faster computers are with sequential memory access; arrays beat all kinds of data structures as long as the array fits in L1 cache, and in many cases even when it doesn't. For details, see the great video titled "Bjarne Stroustrup: Why you should avoid Linked Lists". I wish the R in RAM actually meant performance. (Benchmark sketch below.)
1
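A quick sketch of the effect: summing the same values from a contiguous Vec vs pointer-chasing through a LinkedList. Exact numbers vary by machine, but the Vec is typically several times faster (compile with --release for meaningful timings):

```rust
use std::collections::LinkedList;
use std::time::Instant;

fn main() {
    const N: u64 = 1_000_000;

    let vec: Vec<u64> = (0..N).collect();        // contiguous memory
    let list: LinkedList<u64> = (0..N).collect(); // one heap node per element

    let t = Instant::now();
    let sum_vec: u64 = vec.iter().sum();
    let vec_time = t.elapsed();

    let t = Instant::now();
    let sum_list: u64 = list.iter().sum();
    let list_time = t.elapsed();

    assert_eq!(sum_vec, sum_list);
    println!("Vec: {vec_time:?}, LinkedList: {list_time:?}");
}
```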