Comments by "Vitaly L" (@vitalyl1327) on "ThePrimeTime"
channel.
-
@xelspeth Both C and Java are eager, imperative, structured languages; both have statements and expressions, and both have very primitive (and very similar) type systems. Nothing fundamentally different. Compare that to, say, a lazy functional language, a total language, the eager languages of the ML family, meta-languages with AST macros, or Prolog. Those would be worlds apart. But C and Java are almost the same on this scale.
-
@thebluriam these days most systems are very complex and contain multiple parts, some software, some purely hardware, and there are very few tools available for simulating such systems. Try to find a decent mixed-signal simulator that will simultaneously let you debug software running on an MCU and debug how an analog circuit responds to that software's behaviour, all in properly simulated time.
So, until we have such simulators, the only real way to debug such systems is to run them physically, in real time, and collect as much data as you can while they run - pass all the trace data through available pins if you have any, even blink LEDs and record slow-motion video (I did it a few times; it was quite fun), use analog channels to log more data (see the sketch after this comment)... What is not possible in such scenarios is to pause the system at any moment you like and inspect it with a debugger.
And these are the systems this world runs on - dozens to hundreds of MCUs in any modern car, MCUs running the lift in your building, MCUs in the medical equipment in your hospital, etc.
It means that if we want to sustain the very foundations of our civilisation, we should not train programmers who might eventually end up supporting such systems with an emphasis on interactive debugging. It is much better to teach everyone debugging the hard way, and only then tell them that there's such a thing as a debugger, which can be handy if your system is not time-sensitive and all the usual debugging methods have failed.
Not the other way around. So, my point is, the hard methods should always be the default, with interactive debugging only a last resort. We'll have better developers this way.
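To make the pin/LED tracing described above concrete, here is a minimal C sketch, assuming a hypothetical memory-mapped GPIO output register; the address, register name and pin number are illustrative and would come from the vendor's device header on a real MCU. Each edge or pulse burst on the spare pin is one trace event, recovered later with a logic analyser or a slow-motion video of an LED.

```c
/* Minimal sketch of tracing through a spare GPIO pin. GPIO_OUT and TRACE_PIN
 * are hypothetical, device-specific names. */
#include <stdint.h>

#define GPIO_OUT  (*(volatile uint32_t *)0x40020014u)  /* hypothetical output register */
#define TRACE_PIN (1u << 5)                            /* spare pin wired to a test point / LED */

static inline void trace_mark(void)            /* one edge = one event */
{
    GPIO_OUT ^= TRACE_PIN;
}

static inline void trace_pulse(unsigned n)     /* n short pulses encode an event id */
{
    while (n--) {
        GPIO_OUT |= TRACE_PIN;
        GPIO_OUT &= ~TRACE_PIN;
    }
}

void control_step(void)
{
    trace_mark();       /* entry edge: the pin's timing shows the step's real duration */
    /* ... time-critical control code that can never be paused ... */
    trace_pulse(3);     /* e.g. "took the third branch" */
}
```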
-
@YMAS11
Languages can be classified along many dimensions, and the choice of which dimensions matter is somewhat arbitrary.
One dimension is the level of abstraction. It's the most well-known classification, but most people still get it wrong. On this axis, languages go from low level to high level, where low level means the operational semantics is close to some perceived view of the real hardware (talking about the real hardware makes no sense due to its massive complexity, so it's some abstract mental model, some RISC-y machine code).
From this extreme, languages go to higher levels, with operational semantics more and more steps removed from the small-step semantics of the machine code.
C, Java, Python - they are all very close to the low-level side of this axis, as they have very explicit control flow, mostly explicit memory handling, an explicit order of execution, and all use the same structured programming for expressing this low-level control flow.
The higher you go on the abstraction ladder, the less obvious the control flow is, and it can become entirely undefined for very high-level languages. They can have no tools for explicit control flow whatsoever; SQL and Datalog are common examples.
Some languages cheat and can place themselves anywhere on this abstraction axis. These are the meta-languages, with proper macro metaprogramming capabilities that let you add constructs with arbitrarily complex semantics to the language and turn the host language into any other language you can imagine. Rust belongs to this group, as it provides procedural macros that can turn simple low-level Rust into, say, a very high-level, optimised SQL.
Now, there are many other dimensions for classification, type systems being among the most common. All of the common low-level languages either use very simple ad hoc type propagation and very loosely defined subtyping, or have entirely dynamic typing.
More complex type systems - the Hindley-Milner typing of the ML family and of Miranda, Haskell, etc., System F typing, the dependent typing of Agda, Coq and the like - don't fit well into the low-level, explicit-control-flow, structured programming model of the common languages.
Another dimension, which I decline to consider important, is the typical way the language is implemented. Natively compiled, natively compiled but with a complex runtime and managed memory, JIT-compiled with some intermediate representation (such as CLR or JVM), bytecode-interpreted like Python or Perl - all such details are immaterial, and it has been shown many times how easily languages can be implemented on top of any of these models regardless of the language's other qualities - see QuakeC, PyPy, multiple Java AOT implementations, etc.
As for algotrading - well, it exists, it makes money, it pays really well... What else can I say? I'm also grateful to it for driving higher-end FPGA prices down due to growing demand.
-
@lukivan8 not in the slightest. If someone "can" do a surgeon's work but doesn't have a licence, doesn't have obligations to a regulating body, doesn't have all the necessary personal-responsibility safeguards, he's not a surgeon. He's an educated dilettante. The same goes for engineers. It does not matter how hard you self-studied or how high your IQ is; nobody will ever let you design a bridge if you're not a real engineer, with obligations to regulating bodies, with personal responsibility, etc.
Now, the same goes for software engineers too. I am not going to believe you if you claim that you, personally, were never a victim of rogue software engineers. That you never suffered from using bad or outright dysfunctional software. You did. Everyone did. And nobody should have suffered, had this field been properly safeguarded, like medicine and other engineering fields. It's high time we kick out all the "self-taught" who cannot pass certification, that we introduce governing bodies, that we introduce personal responsibility for any consequences of the bad decisions made by software engineers.
-
@DemiImp I am talking about generic knowledge, transferable across platforms, which can only be gained by studying one platform (probably a toy one) thoroughly. Things like ABIs, registers and register spills, caches, the cost of memory access, atomics, memory ordering, pipelines, the effect of scheduling on IPC, alignment, SIMD/SPMD, and many more (a small cache-effects sketch follows below).
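As a hedged illustration of one item from that list (caches and the cost of memory access), the following C sketch walks the same buffer twice, once sequentially and once with a large stride. The buffer size and stride are arbitrary choices, and absolute numbers are machine-dependent; only the ratio of the two timings is meaningful.

```c
/* Same number of loads, but a cache-friendly sequential walk vs a
 * cache-hostile strided walk over the same 64 MB buffer. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 24)      /* 16M ints, well beyond typical cache sizes */
#define STRIDE 4096       /* jump far enough to defeat the prefetcher */

static long long walk(const int *a, size_t step)
{
    long long sum = 0;
    for (size_t s = 0; s < step; s++)          /* visits every element exactly once */
        for (size_t i = s; i < N; i += step)
            sum += a[i];
    return sum;
}

int main(void)
{
    int *a = malloc(N * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < N; i++) a[i] = (int)i;

    clock_t t0 = clock();
    long long s1 = walk(a, 1);        /* sequential: hardware-prefetch friendly */
    clock_t t1 = clock();
    long long s2 = walk(a, STRIDE);   /* strided: mostly cache misses */
    clock_t t2 = clock();

    printf("sequential: %ld ticks, strided: %ld ticks (sums %lld %lld)\n",
           (long)(t1 - t0), (long)(t2 - t1), s1, s2);
    free(a);
    return 0;
}
```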
-
@RotatingBuffalo
Now, I said "F*", not "F#", but you did not notice. "F*" is miles away from the run of the mill Hindley-Millner that F# does and Rust tries to do.
And let me remind you that we're talking about Rust here. Fairly mainstream language, with an ML-inspired type system, procedural macros, region analysis and a lot of other features from the "ivory tower" languages that you believe for no good reason to be impractical.
C and Java have very, very similar use cases. I worked on a high-frequency trading system that was largely written in Java. Eating C cake, evidently. I also worked on pure C HFT systems, ones written in C++, ones with large parts implemented in HDLs. There was no real difference in what C and Java did in those scenarios. Just plain, predictable imperative languages with more or less low level control of the memory layout. Nothing fancy. Everything very much the same. Not to mention java running on microcontrollers or even NFC chips (already mentioned JavaCard). For all practical purposes, Java is not too far from C. Yes, GC can and will cause troubles, yes, you need to code in a certain way for real-time, but same applies to C, your malloc() and free() are also not allowed.
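A minimal sketch of the "no malloc()/free() on the real-time path" point, assuming a fixed-size block pool set up before real-time operation starts. The pool and block sizes are illustrative; a production allocator would add locking or per-thread pools, stronger alignment control and diagnostics.

```c
/* All storage is preallocated up front; the hot path only takes and returns
 * fixed-size blocks from a free list, in bounded time, with no system calls. */
#include <stddef.h>

#define POOL_BLOCKS 256
#define BLOCK_BYTES 64

typedef union block {
    union block *next;              /* free-list link while the block is unused */
    unsigned char data[BLOCK_BYTES];
} block_t;

static block_t pool[POOL_BLOCKS];
static block_t *free_list;

void pool_init(void)                /* called once, before real-time operation starts */
{
    for (size_t i = 0; i + 1 < POOL_BLOCKS; i++)
        pool[i].next = &pool[i + 1];
    pool[POOL_BLOCKS - 1].next = NULL;
    free_list = &pool[0];
}

void *pool_take(void)               /* O(1); returns NULL when the pool is exhausted */
{
    block_t *b = free_list;
    if (b) free_list = b->next;
    return b;
}

void pool_give(void *p)             /* O(1): push the block back on the free list */
{
    block_t *b = p;
    b->next = free_list;
    free_list = b;
}
```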
-
@nitsujism You need to maintain FPS for a human user. It's not going to break the logic if you're running at 0.001 FPS instead - the rest of the system will behave the same way; the user himself is the understanding part.
For audio - again, it's quite easy to bag the data and test on historical data, so real-time is not always required. A delay line is trivial to simulate.
As for logging, of course it would be extremely dumb to use I/O streams, any kind of system calls, etc. All the proper logging frameworks use lock-free communication from real-time threads to non-real-time logger threads, with potentially quite large buffers accumulating the logged data (see the ring-buffer sketch below). I'm in no way advocating just sprinkling printfs here and there; that would be barbaric.
As for segfaults, quite often simply running a debug build can make your segfault go away (or manifest somewhere else). Valgrind, an address sanitiser, or just more elaborate logging would be far more precise in locating where exactly your data got corrupted (which can be very far away from where the segfault happened). Debuggers only delay finding the root cause of the problem by diverting your attention to unrelated code paths.
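A sketch of the kind of lock-free real-time-to-logger communication described above: a single-producer / single-consumer ring buffer using C11 atomics. The record layout and capacity are illustrative; the real-time thread only copies a record and bumps an index, while a separate logger thread drains the buffer and does the slow I/O at its own pace.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define LOG_CAPACITY 4096              /* power of two, so index masking works */

typedef struct {
    uint32_t event_id;
    uint32_t value;
    uint64_t timestamp;
} log_record_t;

static log_record_t ring[LOG_CAPACITY];
static _Atomic size_t head;            /* written only by the producer (real-time thread) */
static _Atomic size_t tail;            /* written only by the consumer (logger thread) */

bool log_push(const log_record_t *r)   /* real-time side: never blocks */
{
    size_t h = atomic_load_explicit(&head, memory_order_relaxed);
    size_t t = atomic_load_explicit(&tail, memory_order_acquire);
    if (h - t == LOG_CAPACITY)
        return false;                  /* full: drop rather than stall */
    ring[h & (LOG_CAPACITY - 1)] = *r;
    atomic_store_explicit(&head, h + 1, memory_order_release);
    return true;
}

bool log_pop(log_record_t *out)        /* logger side: runs at its own pace */
{
    size_t t = atomic_load_explicit(&tail, memory_order_relaxed);
    size_t h = atomic_load_explicit(&head, memory_order_acquire);
    if (t == h)
        return false;                  /* empty */
    *out = ring[t & (LOG_CAPACITY - 1)];
    atomic_store_explicit(&tail, t + 1, memory_order_release);
    return true;
}
```

Dropping records when the buffer is full (rather than blocking) is deliberate: the real-time thread must never wait on the logger.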
-
@daniilpintjuk4473 That's not always possible. Either your code is heavily concurrent, or you're wasting the hardware's capacity. Having said that, yes, you must always do your best to avoid concurrency where it's not needed. E.g., a single FSM with predictable timing is better than an interrupt-driven system, as you can always reason about the worst-case timing and make sure it meets the hard real-time requirements (a polled-FSM sketch follows below). There's no way to do that with concurrency present. Yet there's a form of concurrency that's even harder to debug and that's pretty much unavoidable in any modern system - multiple communicating devices, e.g., a number of MCUs talking to each other: each can be perfectly single-threaded and sequential, but their interaction still only makes sense in real time and cannot be paused.
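A rough C sketch of the "single FSM with predictable timing" idea. The sensor and actuator functions are hypothetical stand-ins for device-specific I/O; the point is that the worst-case cost of one loop iteration is simply the longest path through the switch, which can be measured or bounded statically, unlike an interrupt-driven design.

```c
/* One sequential loop polls inputs and steps a state machine. */
typedef enum { ST_IDLE, ST_RAMP_UP, ST_RUNNING, ST_FAULT } state_t;

/* Stand-ins for device-specific, bounded-time I/O (hypothetical). */
static int  read_sensor(void)       { static int t; return (t += 7) % 200 - 20; }
static void apply_output(int level) { (void)level; }

void control_loop(void)
{
    state_t st = ST_IDLE;
    for (;;) {                        /* one iteration per fixed control tick */
        int x = read_sensor();
        switch (st) {
        case ST_IDLE:    if (x > 0)    st = ST_RAMP_UP;               break;
        case ST_RAMP_UP: apply_output(x / 2);
                         if (x > 100)  st = ST_RUNNING;               break;
        case ST_RUNNING: apply_output(x);
                         if (x < 0)    st = ST_FAULT;                 break;
        case ST_FAULT:   apply_output(0); /* stay here until reset */ break;
        }
    }
}
```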
-
@nezu_cc I did a ton of different things in the last three decades. That included scientific computing (analysing experimental data from CERN), along with building those detectors and the hardware/software triggers for them, working on a specialised CAD engine for shipbuilding, working on compilers for GPGPU along with GPU hardware design, working on video compression and low-latency video communication, high-frequency trading, industrial robotics. People I worked with had, of course, different paths; they worked in areas from game development to embedded automotive/aerospace, medical robotics, and many, many more areas. Dozens and dozens. The world does not revolve around the web. The web is the least interesting domain in IT, yet for some weird reason every new developer gravitates towards it.
-
@VuxGameplays there is a whole huge world outside of the web. Systems development - operating systems, compilers, DBMS engines, etc. Embedded development - real-time, control, safety, etc. Telecoms, robotics, HPC, CAD/CAE, automation in general, and many, many more exciting and fun areas.
-
@mortiz20101 if you failed to comprehend what I'm talking about, you probably should not use LLMs.
1) Feedback loop: feed the result of testing the LLM output back into the LLM, with all the relevant results (syntax errors, test failures, static code analysis output, etc.).
2) Critic: every time an LLM produces an output, do it a few times with the same prompt, then use another LLM prompt to criticise all the outputs and select the best one.
3) Code sandbox: give the LLM a tool to run arbitrary code in a safe sandbox. Use inference harnessing to ensure the tool is invoked as soon as the call appears in the output.
4) SMT, Prolog, etc.: LLMs cannot reason, obviously. But they can translate an informal problem into a formal language, which can then be processed by an SMT solver, a Prolog interpreter, or whatever else you use as a reasoning tool.
You have a lot to learn. Do it. Or stay ignorant.
-
@Dogth_xd you make very little sense. Is it a lack of formal higher education showing, or are you simply being deliberately obtuse?
Software development is an engineering discipline, and programming IS mathematics. Period. If you don't recognise this as a fact, you simply know nothing about software engineering and programming. The fact that there are fewer opportunities to cause severe harm is irrelevant, and you're evidently underestimating the actual damage that undereducated code monkeys inflict on our society.
As for Kalashnikov, guess who worked for him? A little guy called Hugo Schmeisser. Ring any bells?
You, uneducated people, are so far behind those who actually have systematic knowledge that you simply have no mental capacity to comprehend how huge the chasm is, how much you're missing, and how incapable you are in comparison to anyone with a proper education. Sorry to break the news to you; I understand that you need some psychological comfort, some rationalisation for your ignorance, but I'm not in a generous mood and have no intention of feeding your rationalisation attempts.
-
@NeoChromer look, kid, it is you who is mumbling nonsense here. You really think your interactive debugging is an efficient way to solve problems, and it is hilarious. I worked on scientific number crunching, large industrial CADs, GPGPU compilers, hardware drivers, video compression, high-frequency trading, industrial robotics, database engines - i.e., projects of all shapes and sizes. I almost never met a case where interactive debugging would have been more efficient. And then some web code monkey jumps up and babbles that my approach is "nonsense" and that what monkeys prefer to do is much better.
-
@xybersurfer I ask them how they'll debug a hypothetical problem (or a real one, if I have time to give them a real environment set up with some deliberately broken code). If they reach for a debugger as the first tool, I get suspicious immediately. A few more questions usually prove I'm right, and they are cavalier cowboy coders who avoid any systematic, slow, steady approaches.
If they try to add logging (or, if it's already there, to use it properly), if they reach for an address sanitiser, for Valgrind, or try to instrument the code (even with simple preprocessor macros, if setting up a Clang toolchain is overkill; a macro-instrumentation sketch follows below) - I'm pleased: this person clearly knows how to systematically narrow down problems and how to find optimal solutions.
Yes, debuggers can be useful if you need to inspect some black-box code (or otherwise code that's impractical to instrument and modify in any way), which is often the case with third-party dependencies. But that's again just a case against third-party dependencies in general. Having to depend on an OS, a compiler and a system library is already too much (and yes, there were cases where it was better to avoid even these dependencies and run on bare metal instead).
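A hedged sketch of the "instrument the code with simple preprocessor macros" option: a trace macro that is active only when a build flag is defined and compiles to nothing otherwise. The macro name, the flag and the example function are illustrative, not taken from any particular framework.

```c
#include <stdio.h>

/* TRACE_BUILD and TRACE are illustrative names. The ##__VA_ARGS__ form is a
 * widely supported compiler extension (GCC, Clang, MSVC). */
#ifdef TRACE_BUILD
#define TRACE(fmt, ...) \
    fprintf(stderr, "[%s:%d %s] " fmt "\n", __FILE__, __LINE__, __func__, ##__VA_ARGS__)
#else
#define TRACE(fmt, ...) ((void)0)   /* release build: the instrumentation costs nothing */
#endif

static int parse_header(const unsigned char *buf, int len)
{
    TRACE("enter, len=%d", len);
    if (len < 4) {
        TRACE("too short");
        return -1;
    }
    int version = buf[0];
    TRACE("version=%d", version);
    return version;
}

int main(void)
{
    unsigned char msg[] = { 2, 0, 0, 7 };
    return parse_header(msg, (int)sizeof msg) < 0;
}
```

Compiling with -DTRACE_BUILD enables the output; the shipped binary is untouched.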
-
@samuelmorkbednarzkepler beyond a certain level, mental maturity is pretty much a requirement for moving forward, so I'd posit that people exhibiting a degree of immaturity are indeed stuck in their development, and cannot progress further into a narrower specialisation and higher degrees of mastery of their domain.
I don't think software development is any different. Just as a marine biologist will likely be clueless in ornithology or myrmecology, an embedded developer may be lost in, say, HPC or game development. On the other hand, the diversity of the development world is mostly an illusion, as there has been nothing really new in the last few decades; every shiny "new" concept, framework or methodology is just a rehashing of something that was already tried and likely rightfully buried years ago. So, again, developers who get dazzled by this illusory diversity of development disciplines are indeed not experienced enough to notice that there's nothing really new out there, and a lot of it is just a perversion of already known things.
-
@Dryblack1 you seem to have no idea whatsoever of what fundamental knowledge is. Yes, it is absolutely supposed to be gap-free, and if it is not, it is useless. You won't be able to reconstruct anything else from first principles if your fundamental base is patchy. I feel this discussion is hopeless - you don't even understand the language I'm using, yet you still somehow believe you're an engineer.
Once again, the simple fundamental things I mentioned are unlikely to be taught directly. Yet you'd know them, dearly, closely, if you had a fundamental, gap-free base of knowledge. You don't. You treat information as a pile of unrelated pieces. You believe you can just "look up" whatever you need at the moment. It's a very naive view, typical of uneducated people who have no idea of what knowledge really is.
Think of my examples again. You don't know any of the things I mentioned. And it means you're uneducated and you're not an engineer. You cannot just look them up - they should have been the very basis of your entire knowledge. You believe you know something (like, some programming languages, maybe), yet without the things I mentioned you don't really know anything. You have a pile of unrelated factoids.
-
@jordixboy No, you can get a delusion that you "learned" something "FREELY and on your OWN". In the vast majority of cases, those who fell for this delusion did not really learn anything. They memorised a few unrelated facts and tricks, and it left them with mythical thinking, not systematic knowledge.
One can argue that you can learn medicine the same way: just go to a library and read all the books. Spoiler alert: you cannot. You'll get an assortment of unrelated facts and you'll fail to acquire systematic knowledge.
In any domain, in any discipline, one must be guided in order to get this system, to be able to link all the pieces of knowledge together and to start producing new coherent knowledge. CS is no different.
And I firmly believe that software engineering must also introduce the same very strict regulations as civil engineering, for safety reasons. This world runs on software, and we hear about crap software written by uneducated monkeys wreaking havoc on a daily basis. All those personal data leaks, falling victim to crypto-lockers, the recent NATS debacle, and so on. We need to keep all the incapable ones out of this profession, and start regulating it rigorously.
-
@ardnys35 lol, you're dim, aren't you? Do you understand what semantics is? Have you seen many languages outside of the typical code monkey selection?
And before you dare to bark that those languages are not practical: you're just ignorant and have no right to have opinions on topics above your cognitive capacity. Don't you dare claim that, say, APL is not practical, when a lot of quant analysts in finance use its derivatives (like K) all the time. And when you compare K to C and C to Python or Java, you'll see that C, Python and Java are just one language with tiny, immaterial differences, while K is worlds apart from them all.
This whole thread is infested with primitive ignorant code monkeys with hurt egos, who just got confronted with the fact they know absolutely nothing about the languages. So funny when code monkeys get defensive and protect their ignorance.
-
@5dollasubwayfootlong you're a funny Java monkey. Funny and pathetic. I once worked on a single code base (one very specialised and very expensive CAD) that contained code in Fortran, PL/I, C#, Perl, Tcl and half a dozen of its own DSLs, all accumulated over 30 years. I work on code bases that contain a lot of K, C++ and SystemVerilog with a good amount of Haskell added. You know, monkey, the kind of jobs that pay really well, especially the HFT jobs. There are always a lot of languages involved. But you, pathetic monkey, would not know; nobody would ever let you anywhere close to such code bases. All you can hope for is some boring, low-paid CRUD trash.
-
@nitsujism games? real-time? Lol. There's nothing real-time there. Nothing bad will happen if you skip a frame or two. Nothing will crash if you pause rendering altogether and inspect. It's soft real-time at most, nothing interesting. Audio processing - yes, it can be hard real-time, but essentially it's just a stream in, stream out, no complex feedback, so you can just bag all the input and process it at any pace you like (see the replay sketch below).
Now, try a proper hard real-time system. Like automotive, or industrial control, where there's a physical component that won't wait for your code to catch up.
And, no, I have no pity for people who fix segfaults with debuggers. They're beyond salvation. They cannot even use memory sanitisers properly.
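A small sketch of the "bag the input, replay it offline" idea for stream-in/stream-out processing: the same processing function used on the live path is fed from a recorded capture file, so a failure can be replayed at any pace. The file name, block size and the trivial stand-in DSP function are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

#define BLOCK 256

/* Stand-in for the real DSP code; the point is that the exact function the
 * real-time path calls can be driven from a recorded file instead. */
static void process_block(const int16_t *in, int16_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = (int16_t)(in[i] / 2);   /* trivial gain, just to have something */
}

int main(void)
{
    FILE *f = fopen("capture.raw", "rb");    /* previously bagged input stream */
    if (!f) return 1;

    int16_t in[BLOCK], out[BLOCK];
    size_t n;
    while ((n = fread(in, sizeof in[0], BLOCK, f)) > 0)
        process_block(in, out, n);            /* replayed as fast or as slow as needed */

    fclose(f);
    return 0;
}
```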
-
If you're working with a low-level language and rely on a debugger to catch your mistakes, I don't want you anywhere near any mission-critical code. Not in automotive, not in medical robotics, not in aerospace, not in industrial automation. Your way is guaranteed to introduce more bugs than can be tolerated.
Firstly, debuggers do not show you where the problem is. They let you see where the problem manifested. Especially if it's a memory-related problem, you can see its effects far away from the real cause. Then you'll find some hackish workaround and trot away happily, thinking you fixed the bug.
Real developers never rely on debuggers. We'd rather stick to the harsh MISRA-C rules, use all the static analysis tools we can find, build extensive testing infrastructure, and build zero-overhead logging wherever possible. Debuggers will never replace any of this, and will never be of any real added value when you should be doing all the above anyway.
-
@OverG88 I'm a long-time Chisel user, so no, I do know Scala, and, likely, I know it better than you do. There's one thing that kills Scala's claim to be a functional language: it runs on the JVM, and there is no tail-call optimisation guarantee. The Scala compiler optimises statically resolvable tail recursion, but it does not optimise general tail calls, therefore more complex recursion schemes are impossible to implement without running out of stack. Also, OCaml and F# are not "functional", they are functional-first, but they also contain a ton of imperative features which are absolutely mandatory for them to be useful at all.