Comments by "Anders Juel Jensen" (@andersjjensen) on the "Asianometry" channel.

  31. On the other hand... those are exactly the kind of word-barfs you hear from people who do not get promoted due to personality issues. I happen to be "one of those people". But rather than bitch about it I've simply accepted that only certain kinds of people should be on my team, and that I should never be put in a position where I have to be "the considerate fatherly type" to make things go smoothly. I don't do smooth. I do effective. People on my team must naturally prefer extreme directness bordering on the outright blunt. "Yes" means "Yes", "No" means "No", and "This approach sucks" does NOT mean "You're a failure as a human being and I hate you now". It means "This approach sucks. We must find another. Any ideas?". The upside is that my team is never late, we get frequent bonuses and extra days off, and our personnel turnover is practically zero. People either ask to be transferred within the three-month trial period, or they stick around forever. And management has gotten increasingly good at figuring out who's a good candidate for my team. In return for never advancing I've gotten my own little kingdom. Upper management stays out of my hair, and I stay out of theirs. It's a win-win for everyone when not-at-all-people-persons acknowledge their shortcomings and just collect the other blunt/weird/awkward rejects and make a team out of them. And it can come as no surprise to anyone that in the tech fields there are quite a few who somehow traded social skills for technical prowess.
    46
  140. This is known and expected in the industry. And the usual way to get around it is that once people get good enough to work on the seriously important stuff they sign a period-specific NDA in exchange for a golden handshake and a looong vacation when they leave. The lead times in the high-tech fields are often close to five years, so just being cut out of the loop for 18 months before you're allowed to work in the exact same field for someone else pretty much takes the sting out of it. You'll have fallen far enough behind, and you'll either land in the middle of a product development cycle when many things are already set in stone, or at the beginning of one with 18-month-old info. My buddy is a sales engineer at a crane company. If he gets fired he will get 70% of his salary for 12 months (on top of his new salary), but cannot work with price projection for crane projects. That doesn't mean he can't work on, say, designing a new hydraulic system for some specific type of crane. He just can't work for a competitor in a way that would put his former employer at a serious disadvantage. Being able to undercut every contract bid by precisely 0.5% (knowing the internal prices of his former employer by heart) would not only be unfair to his old company, but also unfair to the customers, who expect bids to be given "blind". Each company has to guess at the level of competition it thinks it is facing and make a competitive bid; not doing so is considered price fixing. But in the above case it would effectively be price fixing against the customer and a practical monopoly for the new employer.
    7
  149. You're right, but the way you worded the second part makes it hard for laymen to understand that when you say "charges" you're not referring to the act of "charging up the battery" but rather to "the amount of charge in Coulombs". So for anyone curious: Watt (rate of electricity consumption) = Joules/second ("energy chunks per time unit"). Watt-hours (total electricity consumption) = Joules/second * 3600 seconds (so the seconds cancel out and you're left with 3600 Joules). However, Watt is also equal to Volts * Amps (sorry, physics teachers, for the laymanified notation). This means that when you have, say, a 3.7V battery rated for 1000mAh of capacity you just multiply the two to get it in mWh, which in this case is 3,700mWh. Then divide by 1000 to remove the "milli" part and you're left with 3.7Wh, or divide by 1000 again to get to the kWh you're used to from your electric bill. In this case 0.0037kWh. But here's the catch: what I just said is complete nonsense... Because a battery does not deliver its rated voltage from 100% to 0% capacity. The voltage declines as the battery discharges. This means that you will, in fact, not get 3.7Wh out of the example above. The battery will start at 3.7V but end at around 2.2V (I'm using my vape battery as an example) before it's sufficiently "flat" to no longer be able to drive my "device". And that's the reason why battery capacity is measured in amp-hours (or milliamp-hours for small stuff), as the Watt-hour approach "is bogus" despite looking more familiar. You can, however, go V * mAh * 3/4 and get a reasonable approximation for modern lithium batteries. "The constant" will change depending on the battery technology, but I've rambled on for long enough, so I'll spare you all a lecture on the implications of a battery's internal resistance and how that directly relates to Ohm's Law.
    6
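
A quick sketch of the arithmetic in entry 149, for anyone who wants to plug in their own numbers. This is a minimal Python illustration, not a battery model: the function names are made up here, the 3.7V / 1000mAh cell is the example from the comment, and the 0.75 default is only the commenter's rough rule of thumb for modern lithium cells.

  def nominal_energy_wh(voltage_v, capacity_mah):
      # Naive estimate: assumes the cell holds its nominal voltage for the
      # whole discharge (the "complete nonsense" case from the comment).
      return voltage_v * capacity_mah / 1000.0  # mWh -> Wh

  def sag_corrected_energy_wh(voltage_v, capacity_mah, correction=0.75):
      # Apply the rough V * mAh * 3/4 correction for the voltage declining
      # as the cell discharges. The constant is chemistry-dependent.
      return nominal_energy_wh(voltage_v, capacity_mah) * correction

  if __name__ == "__main__":
      v, mah = 3.7, 1000  # the example cell from the comment
      naive = nominal_energy_wh(v, mah)
      rough = sag_corrected_energy_wh(v, mah)
      print(f"Naive: {naive:.2f} Wh ({naive / 1000:.4f} kWh)")  # 3.70 Wh / 0.0037 kWh
      print(f"Rough: {rough:.2f} Wh ({rough / 1000:.4f} kWh)")  # ~2.78 Wh
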
  228. Conversely, there is also the saying "Every program has at least one bug. And every program can be optimized by at least one instruction. Hence every program can be optimized until it is only one instruction... that doesn't work!" :P I once had the displeasure of inheriting a program which had seen nearly a decade of optimization by the same guy. It was very, very fast. And very, very compact. But it had a periodic error in a corner case. And the new contract made that corner case... ugh... kinda the main point of the next iteration. I shall spare you the agony which is real-time radio wave analysis with inline modification. Suffice it to say that when you work in that field the government tends to visit often. So I was faced with a problem: stick to the contract or do what was right. I did what was right and rewrote the bloody thing from scratch, but carried over the function optimizations that I could understand. I presented this to management as "heavy modification required to fulfil the new requirements" and cautioned them that, despite not having been identified initially, a complete new formal verification was needed... I didn't sweat at all trying to say that with my best straight face. The project was delivered on time. The program had fewer bugs than before. The program is also nearly twice as large in lines of code and runs about 10% slower (which was still within spec). But most importantly: the guy who got my chair, when I got fed up with the military-industrial complex, isn't fucked six ways from Sunday the next time a major revision is required, or a new use case uncovers a bug that wasn't originally caught. Because I wrote it for other humans to read (I happen to be rather forgetful myself, and that approach has saved my ass several times, so I consider myself "the next guy" when I write code). TL;DR: Lazy blabber-mouth bloat-code is bad, but needlessly terse and over-optimized code is worse in the long run. Computers get faster, so hippo code runs like a zebra in 10 years. Code that might as well be written in Sanskrit ends up costing more money, inducing more human frustration and causing more project delays.
    3
  275. @chrimony If a 66MHz CPU has an IPC (provided no memory bottleneck) which is 15x higher than that of a 1GHz CPU, then they have performance parity. The Athlon XP 1500+ (1.3GHz) has a single-threaded PassMark score of 251. The Ryzen 7 7800X3D (5GHz) has a single-threaded score of 3757. Adjusted for the clock speed advantage that's a score of 977, which means it has an IPC that is ~3.89x higher. Which is kinda funny, because it has a clock speed advantage factor of 3.85x. So if we aren't too pedantic we can say "Half the performance uplift is from clock speed and half is from IPC gains." In the ~15 years between them the ~15x total single-threaded performance uplift doesn't track Moore's Law, but one is a single core and the other is an 8-core. For tasks that scale linearly with cores that's a 120x performance uplift, which is pretty damn close to the 128x Moore's Law projected... But the current line of AMD Ryzen goes up to 16 cores, so there is that. TL;DR: While clock speeds only gain 10-15% per node generation these days, the ~70% shrink each generation is gaining more and more on the clock speed in terms of generational gain. If we do this comparison again in 10 years, 1/3 of the performance will be from clock speed and 2/3 will be from IPC (it already is if we compare to a 1999/2000-era CPU, but I didn't have a verifiable online source handy for you). So I stand by the statement. Yes, it's an oversimplification, but it's a vital concept for understanding CPU progress: "IPC, over time, is everything".
    2
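
For anyone who wants to re-run the clock-speed-versus-IPC split described in entry 275, here is a minimal Python sketch. The PassMark scores and clock speeds are the ones quoted in the comment, and the helper name ipc_ratio is invented for illustration; treat the output as a back-of-the-envelope check rather than benchmark data.

  def ipc_ratio(score_a, clock_a_ghz, score_b, clock_b_ghz):
      # Relative IPC of CPU B versus CPU A: per-clock score of B divided
      # by per-clock score of A.
      return (score_b / clock_b_ghz) / (score_a / clock_a_ghz)

  if __name__ == "__main__":
      # Athlon XP 1500+ (1.3 GHz) vs Ryzen 7 7800X3D (5.0 GHz), single-threaded
      athlon_score, athlon_clock = 251, 1.3
      ryzen_score, ryzen_clock = 3757, 5.0

      clock_advantage = ryzen_clock / athlon_clock   # ~3.85x
      ipc_gain = ipc_ratio(athlon_score, athlon_clock, ryzen_score, ryzen_clock)  # ~3.89x
      total_uplift = ryzen_score / athlon_score      # ~15x

      print(f"Clock advantage: {clock_advantage:.2f}x")
      print(f"IPC gain:        {ipc_gain:.2f}x")
      print(f"Total uplift:    {total_uplift:.2f}x (clock x IPC = {clock_advantage * ipc_gain:.2f}x)")
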
  296. @mattmexor2882 I've been in the game for over 30 years. There's a reason why the FTC, among a laundry list of others, is moving hard to block the Nvidia purchase of ARM. This has nothing to do with fanboyism. It's simply a matter of having observed their practices over two decades. I used to be quite happy with Nvidia... until they pushed a driver update that disabled the PhysX capability of their card if an ATI card was also present in the system (I was running one of each at the time). And PhysX is just one example of Nvidia trying to force out competition with incompatibility (it also happens to be an acquisition case). They also used the patents they obtained from 3DFX and S3 to great effect. There's a reason why Nvidia is no longer supplying GPUs to Apple. That deal went sour. There's a reason why Tesla is no longer working with Nvidia. That deal went sour. There's a reason Nvidia only got to deliver the GPU for the original Xbox... So, how about a challenge? You provide me with all the dirt you can find on AMD (since you seem to think I'm an AMD fanboy). Hit me with all the bogus lawsuits, partner backstabbing, patent infringement, vendor lock-in practices, etc. that you can find. But in return you watch this, for a start, to get a glimpse of why I don't trust Nvidia one little bit: https://www.youtube.com/watch?v=H0L3OTZ13Os Jim (the author of the video) misses a few "glorious" Nvidia moments, but since I don't have sources handy for those I shall spare you any further allegations :P
    2
  327. @aekue6491 I work for a subcontractor of a big international defence contractor. We have long since been briefed that porting existing GAA designs to BPD-GAA will be, and I quote, "a largely automated process for embedded memory and gate logic, but will require substantial consideration and planning ahead of time for analog circuits". Since we work almost exclusively in the boundary layer between analog and digital (such is the nature of real-time signal analysis and shaping) we are currently "a little bit freaked out", as we are in mid-stage design of a GAA-based solution that would ideally be finalised and rolled out as BPD-GAA, as that offers vastly superior noise characteristics. However, we are only now starting to get the bullet points on what to account for early to facilitate a reasonably straightforward porting process. Everything is still tightly under NDA from "the big three", but from the gossip I hear the situation is largely identical everywhere: the EDA tools will be a breeze for the logic folks (CPUs, GPUs, accelerators, PLCs, FPGAs, etc.), but we analog folks (memory controllers, radio spectrum technologies, PCIe/CXL, optic signal modulation, etc.) will be the whipping boys as usual. We generally only get good EDA automation and integration for a node once it is no longer relevant for us (aka once it's mature and cheap enough to make bulk crap products on, like wireless doorbells and fridges and what have you). I hope that satisfies your curiosity, as I can't really divulge anything more specific than this.
    1