YouTube comments of Vitaly L (@vitalyl1327).
-
@xelspeth Both C and Java are eager, imperative, structured languages, both having statements and expressions, and both having very primitive (and very similar) type systems. Nothing fundamentally different. Compare that to, say, a lazy functional language, or a total language, or even the eager languages of the ML family, or meta-languages with AST macros, or Prolog. Those would be worlds apart. But C and Java are almost the same on this scale.
-
@thebluriam these days most systems are very complex and contain multiple parts, some software, some purely hardware, and there are very few tools available for simulating such systems. Try to find a decent mixed-signal simulator that will simultaneously let you debug software running on an MCU and debug how an analog circuit responds to that software's behaviour, all in properly simulated time.
So, until we have such simulators, the only real way to debug such systems is to run them physically, in real time, and then collect as much data as you can while they run - pass all the trace data through available pins if you have any, even blink LEDs and record slow-motion video (I did it a few times, it was quite fun), use analog channels to log more data... What is not possible in such scenarios is to pause the system at any moment you like and inspect it with a debugger.
And these are the systems this world runs on - dozens to hundreds of MCUs in any modern car, MCUs running the lift in your building, MCUs in the medical equipment in your hospital, etc.
It means that if we want to sustain the very foundations of our civilisation, we should not train the programmers who might eventually end up supporting such systems with an emphasis on interactive debugging. Much better to teach everyone debugging the hard way, and only then tell them that there's such a thing as a debugger that can be handy if your system is not time-sensitive and if all the usual debugging methods have failed.
Not the other way around. So, my point is, the hard methods should always be the default, with interactive debugging only as a last resort. We'll have better developers this way.
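For illustration, a minimal sketch in C of the kind of pin-level tracing described above. The memory-mapped GPIO register address and bit position are placeholders invented for the example, not any real MCU's register map:

#include <stdint.h>

/* Placeholder memory-mapped GPIO output register -- substitute the
   real register address and pin mask for the target MCU. */
#define TRACE_GPIO_OUT (*(volatile uint32_t *)0x40020014u)
#define TRACE_PIN_MASK (1u << 5)

/* Toggle a spare pin around the code of interest; a logic analyser
   (or a slow-motion video of an LED) then shows when and for how long
   the section ran, without ever pausing the system. */
static inline void trace_enter(void) { TRACE_GPIO_OUT |=  TRACE_PIN_MASK; }
static inline void trace_leave(void) { TRACE_GPIO_OUT &= ~TRACE_PIN_MASK; }

void control_step(void)
{
    trace_enter();
    /* ... time-critical work under test ... */
    trace_leave();
}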
-
@YMAS11
Languages can be classified along many dimensions, and the choice of dimensions that matter is somewhat arbitrary.
One dimension is the level of abstraction; it's the best-known classification, but most people still get it wrong. On this axis, languages go from low level to high level, where low level means the operational semantics is close to some perceived view of the real hardware (talking about the real hardware makes no sense due to its massive complexity, so it's some abstract mental model, some RISC-y machine code).
From this extreme, languages go to higher levels, with operational semantics more and more steps removed from the small-step semantics of the machine code.
C, Java, Python - they are all very close to the low-level side of this axis, as they have very explicit control flow, mostly explicit memory handling, an explicit order of execution, and all use the same structured programming for expressing this low-level control flow.
The higher you go up the abstraction ladder, the less obvious control flow becomes, and it can be entirely undefined for very high level languages, which may have no tools for explicit control flow whatsoever. SQL and Datalog are common examples.
Some languages let you cheat and place themselves arbitrarily anywhere on this abstraction axis. These are the meta-languages, with proper macro metaprogramming capabilities that let you add constructs with arbitrarily complex semantics to the language and turn the host language into any other language you can imagine. Rust belongs to this group, as it provides procedural macros that can turn simple low-level Rust into, say, a very high level, optimised SQL.
Now, there are many other dimensions for classification, with type systems among the most common. All of the common low-level languages either use a very simple ad hoc type propagation with very loosely defined subtyping, or have entirely dynamic typing.
More complex type systems - the Hindley-Milner typing of the ML family and of Miranda, Haskell, etc., System F typing, the dependent typing of Agda, Coq and the like - none of them fit well into the low-level, explicit-control-flow, structured programming model of the common languages.
Another dimension, which I decline to consider important, is the typical way the language is implemented. Natively compiled, natively compiled but with a complex runtime and managed memory, JIT-compiled with some intermediate representation (such as CLR or JVM), bytecode-interpreted such as Python or Perl - all such details are immaterial, and it has been shown many times how easily languages can be implemented on top of any of these models regardless of the other qualities of the language - see QuakeC, PyPy, multiple Java AOT implementations, etc.
As for algotrading - well, it exists, it makes money, it pays really well... What else can I say? I'm also grateful to it for driving higher-end FPGA prices down due to growing demand.
-
The main reason behind all these "theories" is plain ego. Real science is genuinely hard. You have to devote your life to studying the fundamentals so that, after many, many years of hard work, you can begin to understand, at least superficially, what is happening at the cutting edge of science. The cranks lack the intellect even to start, let alone walk that whole path. But the ego hurts; it is unpleasant to realise how inferior and dumb you are compared to real scientists, to those who walked the path to the end and devoted tens of thousands of hours of their lives to hard study. So they invent simple and wrong explanations of everything, in which they come out as geniuses and all the scientists as frauds. Pathetic. That is exactly why they deserve mockery and ridicule. And it works out well for us - we can mock the cranks with a completely clear conscience, because who else deserves it, if not them?
-
@juliesharp5077 also, just for you to gauge your own ability: you asked if this vaccine is based on mRNA. What does that tell everyone about you? That you failed to go to PubMed and read any of the reports, to find out that it is based on a protein fragment in a very standard and decades-old adjuvant, with the protein itself being produced by a genetically modified yeast - just like, say, insulin, and a lot of other essential proteins, enzymes and such. Yet you made up your mind long before you even had a chance to check the facts. Now you're deferring to the imagined authority of the crackpot "scientists" who appeal to your biases, again, without even trying to gain any knowledge of your own. And then you cry "arrogance" and act surprised when treated with utter contempt. Funny.
-
Of course it exists. You really lack imagination if you think all people are so low and primitive that they cannot have a dream job. Imagine wanting to be, say, an astronaut. Quite a legit dream job, right? You cannot be a freelance astronaut. And, no, only very primitive and worthless people want a job merely for a better income. You cannot be a freelance neurosurgeon either; for those who want to save lives and see it as their calling in life, it's a dream job. And, no, you have no freaking idea of what people in the medical professions do. You cannot be a freelance nuclear physicist, a freelance microbiologist, etc.
People who say there is no such thing as a "dream job", who dream of having no job and simply existing, converting food into dung, are empty and primitive and deserve nothing but contempt. You really lack imagination and understanding of how this world works. Yes, right, you can play tunes on a guitar on your own, without it being your job. You cannot research metabolic pathways on your own. You cannot run experiments at the LHC on your own. You cannot build an aircraft turbine all on your own in your spare time.
-
@lukivan8 not in the slightest. If someone "can" do a surgeon's work but doesn't have a licence, doesn't have obligations to a regulating body, doesn't have all the necessary personal-responsibility safeguards, he's not a surgeon. He's an educated dilettante. The same goes for engineers. It does not matter how hard you self-studied or how high your IQ is; nobody will ever let you design a bridge if you're not a real engineer, with obligations to regulating bodies, with personal responsibility, etc.
Now, the same goes for software engineers too. I am not going to believe you if you claim that you, personally, were never a victim of rogue software engineers. That you never suffered from using bad or outright dysfunctional software. You did. Everyone did. And nobody should have suffered, if only this field were properly safeguarded, like medicine and other engineering fields. It's high time we kicked out all the "self-taught" who cannot pass certification, introduced governing bodies, and introduced personal responsibility for any consequences of the bad decisions made by software engineers.
-
@chakritlikitkhajorn8730 a language is defined by its purpose. A language that fits its purpose, with a nice, clean semantics reflecting that purpose - the problem domain - as closely as possible, is a well-designed language. In this regard, Brainfuck is perfect, and not just that, it's beautiful.
JS, meanwhile, serves no purpose. It was not derived in any way from its supposed problem domain. It's inconsistent, and its semantics is even hard to define (it shows that the language was jotted down in a couple of days, and no spec was written before the implementation started).
If a PL theorist tried to build a language for the niche occupied by JS, it'd be a very different language - most likely something similar to Scheme. If a PL theorist had to create an accessible language with a dense encoding for a device with 4 KB of RAM and an 8080 CPU, they'd likely reinvent Basic, given that Forth is deemed too alien to the target audience and to teaching purposes.
-
What disgusting, destructive nonsense. "Programmers" without knowledge of mathematics are dangerous and destructive; they have no place in the industry. A programmer who does not use mathematics in *every*, absolutely every problem he solves is simply a scoundrel. It is exactly such "programmers" who are to blame for the Post Office scandal and for dozens of similar (but less publicised) ones.
The author clearly has no idea what mathematics even is. For him, mathematics means there are numbers involved. Hello, dimwit - is graph theory not mathematics to you? Is logic not mathematics? Formal semantics, proof theory, set theory, grammars and all the rest - not mathematics?
-
@SillySussySally this is exactly what I said - GPUs issue instructions from other threads (in NVidia parlance, warps), while OoO CPUs issue instructions from the same thread that they know do not depend on anything that's currently stalled.
So, yes, OoO CPUs have higher latency. Simpler CPUs (such as the ones you'll find in microcontrollers) have a much lower latency and, more importantly, a predictable latency. GPUs have lower latency (in terms of cycle count, not time - they normally run at a lower clock frequency) simply by virtue of being much simpler cores with shorter and simpler pipelines.
Keep in mind that the exact NVidia microarchitecture is not public knowledge, so we can only assume here. There are other GPU designs that are far more open and well documented, though, so we can extrapolate from that knowledge. I personally worked on two mobile GPU cores, ARM Mali 6xx and Broadcom VC5, which are wildly different from each other. Latencies in both were (in clock cycles) still smaller than in high-performance Intel cores and high-end ARM cores (but higher than in the in-order ARM cores).
-
@AccordingtoNicole so, no arguments on the actual subject? How will you learn the things that matter (as in, fundamental science) without going to a university? It may be OK to bail out half-way, but still, there's no other way. And, no, it's not about "feeling important", it's about the very basic need to have at least some meaning in life, instead of being a passive consumer, which is, apparently, your main goal.
-
Interestingly, the first assemblers had no mnemonics and used a more algebraic notation. Of today's assemblers, only the Hexagon assembler resembles that at all. For examples, I recommend reading Turing's papers with their assembly listings.
And the first Lisps had nothing in common with functional programming, apart from having a rudimentary lambda. Modern Lisp (apart from Scheme and Clojure) is not functional either.
The VT100 mentioned in the video had a 120-character mode, and syntax highlighting (bold, underscored, blinking) was already used very actively. The VT50, yes - 80 characters and 11 lines at most, and upper case only.
Oh, and the criticism of goto is somewhat exaggerated. I recommend checking how many times goto is used in, say, the Linux kernel.
-
@DemiImp I am talking about generic knowledge, transferable across platforms, which can only be gained by studying one platform (probably a toy one) thoroughly. Things like ABIs, registers and register spills, caches, the cost of memory access, atomics, memory ordering, pipelines, the effect of scheduling on IPC, alignment, SIMD/SPMD, and many more.
-
@RotatingBuffalo
Now, I said "F*", not "F#", but you did not notice. F* is miles away from the run-of-the-mill Hindley-Milner that F# does and Rust tries to do.
And let me remind you that we're talking about Rust here. A fairly mainstream language, with an ML-inspired type system, procedural macros, region analysis and a lot of other features from the "ivory tower" languages that you believe, for no good reason, to be impractical.
C and Java have very, very similar use cases. I worked on a high-frequency trading system that was largely written in Java. Eating C's cake, evidently. I also worked on pure C HFT systems, ones written in C++, ones with large parts implemented in HDLs. There was no real difference in what C and Java did in those scenarios. Just plain, predictable imperative languages with more or less low-level control of the memory layout. Nothing fancy. Everything very much the same. Not to mention Java running on microcontrollers or even NFC chips (the already mentioned JavaCard). For all practical purposes, Java is not too far from C. Yes, GC can and will cause trouble, and yes, you need to code in a certain way for real-time, but the same applies to C: your malloc() and free() are also not allowed.
-
@Andrumen01 even if you always stick to the first implementation of a language and ignore all the others, there is still no reason to divide languages into "compiled" and "interpreted". Not to mention that the line between compiled and interpreted is very blurry. The said Python compiles into bytecode first, then executes that bytecode. Modern C compilers compile into some kind of IR (GIMPLE, LLVM IR, etc.), and then either lower that IR further to target machine code, or not - it can be executed as is. Is there a significant difference? Nope.
What's really different between C and Python is how dynamic they are. C is almost fully static, while Python has far too many dynamic features - dynamic dispatch for everything, dynamic virtual tables, magic methods, hash maps all over the place, all that stuff. You can compile Python into native code all you want; it'll still be slow and clumsy due to this level of dynamism. And you can stuff your C code with tons of run-time dynamic lookups and such, and your code will be as slow and clumsy as if it were written in Python.
The more reasonable axes for dividing languages into groups are static vs. dynamic, manual vs. managed, statically typed vs. dynamically typed (a different axis from static vs. dynamic in general), and strongly typed vs. weakly typed. And then there are many meaningful sub-divisions - eager vs. lazy, statement-based vs. expression-based, purely imperative vs. can-be-functional vs. functional-first, and so on.
As for the web, I simply decline to touch it. Web UIs are ugly, the user experience is very disappointing, and the whole stack of standards and technologies is over-engineered and extremely clumsy. There are much better ways, so I'll stick to them and wait until the web fad is finally over.
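To make the earlier point about run-time dynamic lookups in C concrete, a toy sketch (not from the original comment): the same operation called statically and through a name-based function table, where the second form pays for a string search on every call, much like a dynamic language's attribute lookup.

#include <stddef.h>
#include <stdio.h>
#include <string.h>

typedef int (*binop_fn)(int, int);

static int add(int a, int b) { return a + b; }
static int sub(int a, int b) { return a - b; }

/* "Pythonic" C: resolve the operation by name at run time. */
struct entry { const char *name; binop_fn fn; };
static const struct entry table[] = { { "add", add }, { "sub", sub } };

static binop_fn lookup(const char *name)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].name, name) == 0)
            return table[i].fn;
    return NULL;
}

int main(void)
{
    printf("%d\n", add(2, 3));            /* static: direct call, resolved at compile time */
    printf("%d\n", lookup("add")(2, 3));  /* dynamic: name-based lookup on every call      */
    return 0;
}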
-
@TheBoing2001 your answer got deleted, apparently, for having URLs.
"Javascript is fine." - no, it is not. It's a crappy language devoid of any tools for creating higher level abstractions without suffering too high performance costs. Namely, no proper macros, and no proper control over code generation.
"With 3 other click you can C&P and run entire chess game" - mind you, a chess engine was one of the first programs Konrad Zuze wrote for his primitive computer in 1945. Hardly an impressive achievement in the 21st century.
Clearly you're a fanboy and you don't know anything at all about PLT, so you cannot see how deeply flawed JS is.
And, no, ease of deployment is not a virtue of JS the language. It did not facilitate this cross-platform deployment in any way. It's simply the result of web standards monopoly. If Tcl/Tk was such a standard, you could deploy a Tcl/Tk application everywhere equally effortlessly. So your argument makes zero sense.
Now go and learn some computer science, you're clearly lacking the most basic education.
-
@nitsujism You need to maintain FPS for a human user. It's not going to break the logic if you're running at 0.001 FPS instead - the rest of the system will behave the same way; the user himself is an understanding part.
For audio - again, it's quite easy to bag the data and test against historic data, so real-time is not always required. A delay line is trivial to simulate.
As for logging, of course it would be extremely dumb to use I/O streams, to use any kind of system calls, etc. All the proper logging frameworks use lock-free communication from real-time threads to non-real-time logger threads, with potentially quite large buffers accumulating the logged data. I'm in no way advocating just sprinkling printfs here and there; that would be barbaric.
As for segfaults, quite often simply running a debug build can make your segfault go away (or manifest somewhere else). Valgrind, or an address sanitiser, or just more elaborate logging would be far more precise in locating where exactly your data got corrupted (which can be very far away from where the segfault happened). Debuggers only delay finding the root cause of the problem by diverting your attention to unrelated code paths.
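A minimal sketch of the lock-free hand-off described above, in C11: a single-producer/single-consumer ring buffer where the real-time thread pushes fixed-size records and a non-real-time logger thread drains them later. The record layout and buffer size are invented for the example.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define LOG_SLOTS 1024u                  /* power of two, so masking works */

struct log_record { uint32_t id; uint32_t value; uint64_t timestamp; };

struct log_ring {
    struct log_record slots[LOG_SLOTS];
    _Atomic uint32_t head;               /* advanced by the producer only */
    _Atomic uint32_t tail;               /* advanced by the consumer only */
};

/* Called from the real-time thread: no locks, no system calls. */
static bool log_push(struct log_ring *r, const struct log_record *rec)
{
    uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail >= LOG_SLOTS)
        return false;                    /* full: drop rather than block */
    r->slots[head & (LOG_SLOTS - 1u)] = *rec;
    atomic_store_explicit(&r->head, head + 1u, memory_order_release);
    return true;
}

/* Called from the non-real-time logger thread, which can then write
   the record out with ordinary (slow) I/O. */
static bool log_pop(struct log_ring *r, struct log_record *out)
{
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail == head)
        return false;                    /* empty */
    *out = r->slots[tail & (LOG_SLOTS - 1u)];
    atomic_store_explicit(&r->tail, tail + 1u, memory_order_release);
    return true;
}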
-
@daniilpintjuk4473 That's not always possible. Either your code is heavily concurrent, or you're wasting the hardware capacity. Having said that, yes, you must always do your best to avoid concurrency where it's not needed. E.g., a single FSM with predictable timing is better than an interrupt-driven system, as you can always reason about the worst-case timing and make sure it meets the hard real-time requirements. There's no way to do that with concurrency present. Yet there's a form of concurrency that's even harder to debug and that's pretty much unavoidable in any modern system - multiple communicating devices, e.g. a number of MCUs, each of which can be perfectly single-threaded and sequential, but whose interaction still only makes sense in real time and cannot be paused.
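For illustration, a minimal sketch in C of the single-FSM style mentioned above: one polled superloop with a switch-based state machine, whose worst-case tick time is simply the slowest single branch. The states and the I/O stubs are invented for the example.

#include <stdbool.h>

enum state { ST_IDLE, ST_MEASURE, ST_ACTUATE };

/* Stubs standing in for whatever bounded-time, non-blocking I/O the
   real system would do. */
static bool start_requested(void) { return true; }
static int  read_sensor(void)     { return 42; }
static void drive_output(int v)   { (void)v; }

void control_loop(void)
{
    enum state st = ST_IDLE;
    int sample = 0;

    for (;;) {                           /* one pass = one bounded "tick" */
        switch (st) {
        case ST_IDLE:
            if (start_requested())
                st = ST_MEASURE;
            break;
        case ST_MEASURE:
            sample = read_sensor();      /* bounded-time read, no blocking */
            st = ST_ACTUATE;
            break;
        case ST_ACTUATE:
            drive_output(sample);
            st = ST_IDLE;
            break;
        }
        /* Worst-case tick time = the slowest branch above, which can be
           measured or bounded statically -- no interrupts to reason about. */
    }
}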
-
@svchlife There are no adequate self-taught people now either. As I said, I have not met a single one in all 30 years of my career.
And the volume of information available now is more of a minus - most of that information is garbage. And the main problem of the self-taught is that they have huge holes in their fundamental knowledge, with no way even to find out those holes exist. The consequence of studying the subject unsystematically.
Those who studied the subject on their own after getting a good, systematic education in one of the STEM fields - ideally with practical work experience in those fields as well - don't have these problems. Any good scientist or engineer already knows how to learn systematically. But then you can't really call them self-taught. Those who came in from zero don't know how and can't do it.
-
@Artem_Babenko It's both the actual curriculum and an understanding of how to learn in principle. Someone who has not already been through a serious higher education will not understand that learning is painful, in many places boring, and that without being forced it is extremely hard not to skip the painful and "boring" parts. That you must not just read a chapter of a textbook, but sit down and scrupulously work through all the exercises after the chapter. Then check the solutions and go over your mistakes. Then come back to that chapter a couple of weeks later. That you must not skip what you don't understand and jump to whatever is more interesting. That you must not judge knowledge by the criterion of "will this be useful or not", because a student, from the summit of his tiny experience, can never assess that in advance.
And of course, yes, millennials and younger are almost all incapable of concentrating for long. Universities beat this out of them, with varying success. On their own, without a man with a stick, they will never learn. AI does not make a very good man with a stick - it is far too easy to ignore without consequences.
-
@TernaryM01 how does it defeat the purpose? OpenCL is an API for interaction between a host and a compute accelerator device, and a flexible language for programming different compute accelerator devices. Just that. Nobody ever promised you any "write once, run everywhere", or anything like that. It was never a goal. Nobody ever asked for it. People who use compute accelerators know they must tune their code for every specific device, or even co-design hardware and software for maximum efficiency.
Also, it's not even true for CUDA. You must tune your CUDA code for different NVidia GPU generations too.
And, no, there are a lot of areas where AMD GPGPU is far more efficient than NVidia. Integer arithmetic, for example. The commercial product I was talking about was built around AMD GPUs for this very reason - better memory performance for the access pattern we needed, and better integer performance (the entire pipeline was purely integer, no floating point whatsoever).
And do not forget about GPGPU on mobile devices (ARM Mali, VC5+, Adreno, etc.).
In scientific HPC it's also not all NVidia - FPGA accelerators are widely used where NVidia just cannot cut it, custom-made ASIC accelerators, etc.
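To make the host/device API point above concrete, a minimal host-side sketch in C that merely enumerates the OpenCL devices present (error handling omitted; the fixed array sizes are arbitrary for the example):

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; p++) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);

        for (cl_uint d = 0; d < num_devices; d++) {
            char name[256] = "";
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof name, name, NULL);
            /* Each device is a separate accelerator; kernels are then
               built and tuned for that specific device. */
            printf("platform %u, device %u: %s\n", p, d, name);
        }
    }
    return 0;
}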
-
@nezu_cc I did a ton of different things in the last three decades. That included scientific computing (analysing experimental data from CERN), along with building those detectors and the hardware/software triggers for them, working on a specialised CAD engine for shipbuilding, working on compilers for GPGPU along with GPU hardware design, working on video compression and low-latency video communication, high-frequency trading, industrial robotics. The people I worked with had, of course, different paths; they worked in areas from game development to embedded automotive / aerospace, medical robotics, and many, many more. Dozens and dozens. The world does not revolve around the web. Web is the least interesting domain in IT, yet for some weird reason every new developer gravitates towards it.
-
@VuxGameplays there is a whole huge world outside of the web. Systems development - operating systems, compilers, DBMS engines, etc. Embedded development - real-time, control, safety, etc. Telecoms, robotics, HPC, CAD/CAE, automation in general, and many, many more exciting and fun areas.
-
@thes7450 web-schmeb; in my opinion this entire web development domain should never have existed, it's just so thoroughly broken. And specialisation exists for a good reason - it's the education system's responsibility to produce fine-tuned specialists. Self-study plays a role too, surely, but again it should happen outside of the actual work environment. You cannot take, say, a web developer, put him into a team of engineers working on the said ECG interface and expect him to learn all the mathematics required for DSP, to learn control theory, to learn how to design robust fault-tolerant systems, etc.
An SE graduate would have specialised in the final year, taking the relevant courses to become a specialist in real-time control and sensor networks.
-
@mortiz20101 if you failed to comprehend what I'm talking about, you probably should not use LLMs.
1) Feedback loop: feed the result of testing the LLM output back into the LLM, with all the relevant results (syntax errors, test failures, static code analysis output, etc.).
2) Critic: every time an LLM produces an output, do it a few times with the same prompt, and then use another LLM prompt to criticise all the outputs and select the best one.
3) Code sandbox: give the LLM a tool to run arbitrary code in a safe sandbox. Use inference harnessing to ensure the tool is invoked as soon as the call appears in the output.
4) SMT, Prolog, etc. - LLMs cannot reason, obviously. But they can translate an informal problem into a formal language, which can then be processed by an SMT solver, a Prolog interpreter, or whatever else you use as a reasoning tool.
You have a lot to learn. Do it. Or stay ignorant.
-
@Dogth_xd you make very little sense. Is it a lack of formal higher education showing, or are you simply being deliberately obtuse?
Software development is an engineering discipline, and programming IS mathematics. Period. If you don't recognise this as a fact, you simply know nothing about software engineering and programming. The fact that there are fewer opportunities to cause severe harm is irrelevant, and you're evidently underestimating the actual damage that undereducated code monkeys inflict on our society.
As for Kalashnikov, guess who worked for him? One little guy named Hugo Schmeisser. Ring any bells?
You uneducated people are so far behind those who actually have systematic knowledge that you simply have no mental capacity to comprehend how huge the chasm is, how much you're missing, and how incapable you are in comparison to anyone with a proper education. Sorry to break the news to you - I understand that you need some psychological comfort, some rationalisation for your ignorance, but I'm not in a generous mood and have no intention of feeding your rationalisation attempts.
-
I don't think school should teach any practical skills at all. That was never the goal of education. School must (but rarely does) teach how to think, how to process information, how to systematise data, how to navigate a library. The practical skills should be built on top of this base.
And, no, there is not nearly enough homework in schools. Homework does not teach a lack of boundaries. It teaches the most important lesson: the ability to self-motivate and to work on your own - and, mind you, to work on your own betterment, not to fulfil some "boss's" fancies.
Not to mention that I have nothing but utter contempt for those who babble that they'll never need Pythagoras' theorem in "real life". And I think their "real life" is really, really sad and empty.
-
Oh, really? Any developer must master dozens of languages. Any capable developer must continuously create new small languages, because Language-Oriented Programming is by far the most powerful abstraction tool. A developer with only one language under their belt, especially such a crippled and ill-designed language as JS, is not a real developer.
Want to be "full stack"? Master the entire stack underneath your level of abstraction. Start with analog and digital electronics, HDLs, all the necessary elements of computer architecture, all the PLT-related mathematics, PLT itself, a deep understanding of the available compilation and interpretation methods, a deep understanding of how operating systems work (and when and where to use them or ditch them altogether). Understand networking, from the PHY level to high-level protocols, and know where you should sit on this stack of abstractions depending on your goals. There's a lot of critically important knowledge there. Javascript, browsers and all that meh are not even among the tangentially impactful pieces of knowledge.
-
@ApprendreSansNecessite to me, it sounds like a weird design when similar data processing happens in different parts of the system. As for the data model - yes, it can stay the same while flowing through the system, but that in no way mandates the same language for handling the data. Validation is normally just part of the data model.
And things like the data model, validation, etc. should be defined in a higher-level (ideally declarative) language anyway, and then translated into whatever language whatever part of the system is using. You're likely familiar with rudimentary forms of this approach employed in IDL and similar protocol / data model description languages. There are often dozens of languages involved, with a single data model between them.
I admit I stay away from anything related to the web, so I may be unaware of some of the rationalisations behind common design choices in that world. From far away, though, the whole web stack looks massively overengineered and badly designed on all levels.
-
@NeoChromer look, kid, it is you who is mumbling nonsense here. You really think your interactive debugging is an efficient way to solve problems, and it is hilarious. I have worked on scientific number crunching, large industrial CADs, GPGPU compilers, hardware drivers, video compression, high-frequency trading, industrial robotics, database engines - i.e., projects of all shapes and sizes. I almost never met a case where interactive debugging would have been more efficient. And then some web code monkey jumps up and babbles that my approach is "nonsense" and that what monkeys prefer to do is much better.
-
@xybersurfer I ask them how they'll debug a hypothetical problem (or a real one, if I have time to give them a real environment set up with some deliberately broken code). If they reach for a debugger as their first tool, I get suspicious immediately. A few more questions usually prove me right: they are cavalier, cowboy coders who avoid any systematic, slow, steady approach.
If they try to add logging (or, if it's already there, to use it properly), if they reach for an address sanitiser, for Valgrind, or try to instrument the code (even with simple preprocessor macros, if setting up a Clang toolchain is overkill) - I'm pleased: this person clearly knows how to systematically narrow down problems and how to find optimal solutions.
Yes, debuggers can be useful if you need to inspect some black-box code (or otherwise code that's impractical to instrument and modify in any way), which is often the case with third-party dependencies. But that's again just a case against third-party dependencies in general. Having to depend on an OS, a compiler and a system library is already too much (and yes, there were cases where it was better to avoid even these dependencies and run on bare metal instead).
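A minimal sketch of the simplest form of preprocessor-macro instrumentation mentioned above: a trace macro that records file, line and a formatted value, and compiles away to nothing unless explicitly enabled. The names are invented for the example, and stderr stands in for whatever sink the real code would use.

#include <stdio.h>

/* Build with -DTRACE_ENABLED to turn tracing on; otherwise the macro
   expands to nothing and costs nothing at run time. */
#ifdef TRACE_ENABLED
#define TRACE(fmt, ...) \
    fprintf(stderr, "%s:%d: " fmt "\n", __FILE__, __LINE__, __VA_ARGS__)
#else
#define TRACE(fmt, ...) ((void)0)
#endif

static int scale(int x)
{
    TRACE("scale(x=%d)", x);
    return x * 3;
}

int main(void)
{
    int y = scale(14);
    TRACE("result=%d", y);
    printf("%d\n", y);
    return 0;
}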
-
@ A competent developer must conform to at least the following:
1) Must be able to solve engineering problems of any complexity efficiently. If a problem can be solved at all, a competent engineer will find the solution in a reasonable time.
2) Must be able to understand and explain solutions, not just "it works, I copy-pasted it from somewhere and tested it". The implication: must not have any gaps in knowledge, including all the underlying fundamental knowledge.
3) Sort of obvious, but must use the right tool for the job. Must not be swayed by immaterial things like familiarity with a tool, as a competent developer must be able to learn any new tool in no time.
4) Related to the previous point, must always be data-driven. Solutions must be based on objective, measurable criteria rather than beliefs, familiarity, popularity, etc.
5) Must be able to be productive in a team of equals, without disrupting their work and without losing productivity on impedance mismatch with the other developers.
These are the basic requirements; there are a few more, but even these would cut off the vast majority.
-
@samuelmorkbednarzkepler beyond a certain level, mental maturity is pretty much a requirement for moving forward, so I'd posit that people exhibiting a degree of immaturity are indeed stuck in their development and cannot progress further into a narrower specialisation and higher degrees of mastery of their domain.
I don't think software development is any different. Just as a marine biologist will likely be clueless in ornithology or myrmecology, an embedded developer may be lost in, say, HPC or game development. On the other hand, the diversity of the development world is mostly an illusion, as there has been nothing really new in the last few decades; every shiny "new" concept, framework or methodology is just a rehashing of something that was already tried and likely rightfully buried years ago. So, again, developers who get dazzled by this illusory diversity of development disciplines are indeed not experienced enough to notice that there's nothing really new out there, and that a lot of it is just a perversion of already known things.
-
@skyhappy first, let me reassure you that you have not the faintest understanding of the real world. It's pathetic, frankly, how code monkeys always run around rambling about the "real world" when they don't even possess the mental capacity to start understanding it.
Secondly, no, computer science is the most important science out there, binding all the other sciences together. It's the science of what computation is, and our universe is built on the notion of computation on many levels. You won't understand it, of course; I'm saying it here not for your benefit. I'm a nuclear physicist who had to turn to computer science, because there were no answers to my questions anywhere else. And only computer science could finally make all the pieces of the puzzle click into place.
And computer scientists are supposed to explore this branch of knowledge. Not write some pathetic crappy code, like your kind does.
And I'm so glad I'll never have to work with primitive nobodies with overblown egos like you. That's another great feature of the current market: it's very picky, and it works exactly as intended, leaving your kind out of anything meaningful.
-
@Dryblack1 you seem to have no idea whatsoever of what fundamental knowledge is. Yes, it is absolutely supposed to be gap-free, and if it is not, it is useless. You won't be able to reconstruct anything else from first principles if your fundamental base is patchy. I feel this discussion is hopeless - you don't even understand the language I'm using, yet you still somehow believe you're an engineer.
Once again, the simple fundamental things I mentioned are unlikely to be taught directly. Yet you'd know them, dearly and closely, if you had a fundamental, gap-free base of knowledge. You don't. You treat information as a pile of unrelated pieces. You believe you can just "look up" whatever you need at the moment. It's a very naive view, typical of uneducated people who have no idea what knowledge really is.
Think of my examples again. You don't know any of the things I mentioned. And that means you're uneducated and you're not an engineer. You cannot just look them up - they must have been the very basis of your entire knowledge. You believe you know something (like some programming languages, maybe), yet without the things I mentioned you don't really know anything. You have a pile of unrelated factoids.
-
@vincentvogelaar6015 apparently you do not understand how to use LLMs. They're not any different from our own minds - we cannot reason either, unless we use tools, such as formal logic. So give LLMs their tools - give them the ability to write down reasoning step by step, and to verify the reasoning using formal methods (as in, make them write down the steps as HOL proofs or Prolog predicates). Give them a sandbox to debug the proofs, just like you do with any other code. Provide a critic loop to make sure they did not miss anything from the formulation of the problem when translating it into a proof.
I'm using LLMs for solving engineering problems, and reasoning is a crucial part of it. Even very small models (like Phi-3) are perfectly capable of reasoning at a level beyond the capacity of an average engineer, when given the right tools and a proper sandbox to test their ideas in (akin to our imagination).
Also, LLMs perform best when reasoning about things that were not in the training set. E.g., they write much better code in languages they've never seen - because they're forced to do it slowly, verifying every step, instead of churning out answers instinctively.
-
@jordixboy No, you can get the delusion that you "learned" something "FREELY and on your OWN". In the vast majority of cases, those who fell for this delusion did not really learn anything. They memorised a few unrelated facts and tricks, and it left them with mythical thinking, not systematic knowledge.
One can argue that you can learn medicine the same way - just go to a library and read all the books. Spoiler alert: you cannot. You'll get an assortment of unrelated facts and you'll fail to acquire systematic knowledge.
In any domain, in any discipline, one must be guided in order to get this system, to be able to link all the pieces of knowledge together and to start producing new coherent knowledge. CS is no different.
And I firmly believe that software engineering must also introduce the same very strict regulations as civil engineering, for safety reasons. This world runs on software, and we hear about crap software written by uneducated monkeys wreaking havoc on a daily basis. All those personal data leaks, falling victim to crypto-lockers, the recent NATS debacle, and so on. We need to keep all the incapable ones out of this profession, and start regulating it rigorously.
-
@ardnys35 lol, you're dim, aren't you? Do you understand what semantics is? Have you seen many languages outside of the typical code monkey selection?
And before you dare to bark that those languages are not practical: you're just ignorant and have no right to have opinions on topics above your cognitive capacity. Don't you dare claim that, say, APL is not practical, when a lot of quant analysts in finance use its derivatives (like K) all the time. And when you compare K to C and C to Python or Java, you'll see that C, Python and Java are just one language with tiny, immaterial differences, while K is worlds apart from them all.
This whole thread is infested with primitive, ignorant code monkeys with hurt egos who just got confronted with the fact that they know absolutely nothing about languages. So funny when code monkeys get defensive and protect their ignorance.
-
@5dollasubwayfootlong you're a funny Java monkey. Funny and pathetic. I once worked on a single code base (one very specialised and very expensive CAD) that contained code in Fortran, PL/I, C#, Perl, Tcl and half a dozen in-house DSLs, all accumulated over 30 years. I work on code bases that contain a lot of K, C++ and SystemVerilog with a good amount of Haskell added. You know, monkey, the kind of jobs that pay really well, especially the HFT jobs. There are always a lot of languages involved. But you, pathetic monkey, would not know; nobody would ever let you anywhere close to such code bases. All you can hope for is some boring, low-paid CRUD trash.
-
Many languages; some are good, some are mediocre, but none as awful as JS. I use C, C++, various flavours of Lisp (including Scheme), Tcl, Verilog, OCaml, and a few more, including even Fortran. Nothing is as ill-designed as JS.
-
Conservatism = self-destruction. Indeed, who needs regulators? Let the monopolists dump sewage straight into rivers and lakes, and if a little listeria turns up in the drinking water, let consumers vote with their wallets and move somewhere with better water. Who needs bank reserve requirements proportional to the riskiness of assets? If the banks do collapse, the grateful taxpayer will bail them out. And if some shop sells poisonous expired food, well, the consumer should vote with their wallet, right? And if everyone sells expired food, that means the Market wants it, and the consumer should keep quiet and be happy. Conservatism is great!
-
@awmy3109 right, what a great argument! Since only JS is available in a web browser, JS is somehow now a great and performant language.
Simply by virtue of monopoly. Nice.
But this is not what the OP was talking about. This is how people with experience from outside the web see the web ghetto. Any time we take a closer look, we recoil in disgust. This whole web thing is piles upon piles of utter crap, and it should never have been like this.
Also, people like you are very often guilty of not even trying to think about whether they really need to build a "web app". Turns out, very often they should not, but they still do, because this is all they know. A lot of the time a native application is a far better solution. Even more often, no UI at all is better still, yet people with such a severe professional deformation fail to see it.
Now, you claimed JS is somehow performant (we know it's not, not even close). You claimed you don't pay for abstractions in such a language. Wrong again - you pay dearly. Admit you're wrong and stop moving the goalposts.
-
@nitsujism games? real-time? Lol. There's nothing real-time there. Nothing bad will happen if you skip a frame or two. Nothing will crash if you pause rendering altogether and inspect. It's soft real-time at most, nothing interesting. Audio processing - yes, it can be hard real-time, but essentially it's just stream in, stream out, with no complex feedback, so you can just bag all the input and process it at any pace you like.
Now, try a proper hard real-time system. Like automotive, or industrial control, where there's a physical component that won't wait for your code to catch up.
And, no, I have no pity for people who fix segfaults with debuggers. They're beyond salvation. They cannot even use memory sanitisers properly.
-
If you're working with a low-level language and rely on a debugger to catch your mistakes, I don't want you anywhere near any mission-critical code. Not in automotive, not in medical robotics, not in aerospace, not in industrial automation. Your way is guaranteed to introduce far more bugs than it is possible to tolerate.
Firstly, debuggers do not show you where the problem is. They let you see where the problem manifested. Especially if it's a memory-related problem, you can see its effects far away from the real cause. Then you'll find some hackish workaround and trot away happily, thinking you fixed the bug.
Real developers never rely on debuggers. We'd rather stick to MISRA C's harsh rules, use all the static analysis tools we can find, build extensive testing infrastructure, and build zero-overhead logging wherever possible. Debuggers will never replace any of this, and they add no real value when you should be doing all of the above anyway.
-
@OverG88 I'm a long-time Chisel user, so no, I do know Scala, and I likely know it better than you do. There's one thing that kills Scala's claim to be a functional language: it runs on the JVM, and there is no tail-call optimisation guarantee. The Scala compiler optimises statically resolvable tail recursion, but it does not optimise general tail calls, so more complex recursion schemes are impossible to implement without running out of stack. Also, OCaml and F# are not "functional", they are functional-first, and they also contain a ton of imperative features which are absolutely mandatory for them to be useful at all.
-
@ivanf.8489 Only a very rare genius can learn from books entirely unguided. The missing part, something not available from the books, is the actual structure of learning - the order in which to learn things. And higher education does a good job of enforcing the right structure even when students are kicking and screaming that "we won't need all this irrelevant stuff".
For knowledge to be absorbed into a comprehensive, gap-free system, it must be absorbed the right way, and so far nobody has found a way to teach this other than in person. The scientific method, of course, can be explained, but in order to learn to apply it methodically and comprehensively one must be instructed through some long and counterintuitive steps.
Yes, the self-taught absolutely do skip the "boring" or seemingly irrelevant fundamentals - all the discrete mathematics, logic, philosophy, elementary physics, some basic calculus, etc. - but even if they did not, the chances of absorbing it all into proper systematic knowledge are low without formal, structured instruction.
-
@Frank00000 you're very obviously not a good engineer at all, and you've already provided sufficient proof. Again, if you think that overengineered crap like Facebook is exemplary engineering, you should never be allowed anywhere close to any real-world engineering. You're clearly self-taught, and it shows. Coding is indeed the least important skill, but communication is also not that important, especially communication with lesser beings - and mind you, you're not even my subordinate, and you'll never make the cut to become one, so you really should not expect the tiniest degree of politeness from me.
Once again, a self-taught person cannot become an engineer, period. Maybe one in ten million at most, some rare genius who managed to get all the fundamentals right without any guidance. You're not one in ten million, clearly.
-
@conorstewart2214 of course, if you can use an MCU or an FPGA, you should use them. But if you need more compute power, then a beefier ARM or even an x86 is unavoidable. In this case Linux is perfect - you can isolate one or more CPU cores, set nohz_full for them, keep RCU callbacks off those cores, and then any Linux process running exclusively on such a core, avoiding any system calls (i.e., no context switches), will be suitable for real-time; the only variable timing left is memory access.
Even better if you're using a heterogeneous SoC (such as Xilinx UltraScale+) - you get a real-time MCU there, an FPGA, and Linux-capable ARM cores, so you can easily mix and match hard real-time, soft real-time and not-real-time-at-all loads on the same device, all conveniently driven by Linux.
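A minimal sketch of the user-space side of this setup, in C, assuming the kernel was booted with something like isolcpus=3 nohz_full=3 rcu_nocbs=3 (the core number and priority are arbitrary for the example): pin the real-time thread to the isolated core, lock memory, and give the thread a real-time scheduling policy.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>

#define RT_CORE 3   /* example: the core isolated on the kernel command line */

static void *rt_loop(void *arg)
{
    (void)arg;
    for (;;) {
        /* hard real-time work here: no system calls, no heap allocation */
    }
    return NULL;
}

int main(void)
{
    /* Lock all memory now to avoid page faults later. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    pthread_attr_t attr;
    pthread_attr_init(&attr);

    /* Pin the thread to the isolated core... */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(RT_CORE, &set);
    pthread_attr_setaffinity_np(&attr, sizeof set, &set);

    /* ...and give it a real-time FIFO priority. */
    struct sched_param sp = { .sched_priority = 80 };
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

    pthread_t tid;
    if (pthread_create(&tid, &attr, rt_loop, NULL) != 0)
        perror("pthread_create");
    pthread_join(tid, NULL);
    return 0;
}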
-
@conorstewart2214 MCUs are also not ideal for real-time (even the Cortex-R line), thanks to sequential execution and the interrupts that many peripherals insist on using.
A Linux process running on an overpowered device, with zero interrupts, solely using DMA for communicating with devices, will have better results than an MCU trying to do more than one thing.
It's the same reason why MPSoCs are so useful: you do not waste time on communication, and communication between system components is guaranteed to be real-time (which is not the case with the usual suspects like CAN).
Also, control in robotics is a real-time problem on all levels. Not just the hard real-time things that MCUs can do, such as FoC, but the higher levels too - gait control if it's a legged robot, VSLAM for any mobile robot - it's all real-time, and most of the pipeline is too computationally intensive to run on even the beefiest MCU. Good luck doing computer vision processing for 4 stereo camera feeds on an MCU. So you need a more general-purpose compute device and, as a consequence, an OS running on it. And this is my point - Linux is perfect as such an OS, and there is no need to use specialised real-time OSes, thanks to the isolation I described previously.
Also note that the same approach is very common not only in robotics but in other hard real-time areas, including high-frequency trading.