YouTube comments of Mikko Rantalainen (@MikkoRantalainen).

  1. 4300
  2. 560
  3. 517
  4. 410
  5. 365
  6. 351
  7. 311
  8. 285
  9. 268
  10. 249
  11. 244
  12. 232
  13. 226
  14. 225
  15. 171
  16. 169
  17. 129
  18. 116
  19. 112
  20. 105
  21. 105
  22. 102
  23. 96
  24. 95
  25. 95
  26. 94
  27. 92
  28. 90
  29. 89
  30. 87
  31. 87
  32. 86
  33. 83
  34. 82
  35. 81
  36. 78
  37. 78
  38. 77
  39. 76
  40. 76
  41. 76
  42. 75
  43. 74
  44. 72
  45. 71
  46. 71
  47. 70
  48. 69
  49. 66
  50. 66
  51. 64
  52. 62
  53. 60
  54. 58
  55. 58
  56. 56
  57. 55
  58. 55
  59. 55
  60. 54
  61. 53
  62. 52
  63. 49
  64. 49
  65. 48
  66. 48
  67. 48
  68. 47
  69. 47
  70. 47
  71. 46
  72. 46
  73. 44
  74. 43
  75. 42
  76. 42
  77. 40
  78. 39
  79. 37
  80. 37
  81. 36
  82. 36
  83. 36
  84. 36
  85. 35
  86. 35
  87. 35
  88. 35
  89. 34
  90. 34
  91. 33
  92. 33
  93. 33
  94. 33
  95. 33
  96. 32
  97. 32
  98. 31
  99. 31
  100. 31
  101. 31
  102. 30
  103. 29
  104. 29
  105. 29
  106. 29
  107. 29
  108. 28
  109. 28
  110. 28
  111. 27
  112. 27
  113. 27
  114. 27
  115. 27
  116. 27
  117. 27
  118. 26
  119. 26
  120. 26
  121. 26
  122. 25
  123. 25
  124. 25
  125. 24
  126. 24
  127. 24
  128. 23
  129. 23
  130. 23
  131. 23
  132. 23
  133. 22
  134. 22
  135. 22
  136. 22
  137. 22
  138. 22
  139. 22
  140. 22
  141. 22
  142. 22
  143. 22
  144. 22
  145. 22
  146. 21
  147. 21
  148. 21
  149. 21
  150. 20
  151. 20
  152. 20
  153. 20
  154. 19
  155. 19
  156. 19
  157. 19
  158. 19
  159. 19
  160. 19
  161. 18
  162. 18
  163. 18
  164. 18
  165. 18
  166. 18
  167. 18
  168. 17
  169. 17
  170. 17
  171. 17
  172. 17
  173. 17
  174. 17
  175. 17
  176. 17
  177. 17
  178. 17
  179. 16
  180. 16
  181. 16
  182. 16
  183. 16
  184. 16
  185. 16
  186. 16
  187. 16
  188. 16
  189. 16
  190. 15
  191. 15
  192. 15
  193. 15
  194. 15
  195. 15
  196. 15
  197. 15
  198. 15
  199. 15
  200. 14
  201. 14
  202. 14
  203. 14
  204. 14
  205. 14
  206. 14
  207. 14
  208. 14
  209. 14
  210. 13
  211. 13
  212. 13
  213. 13
  214. 13
  215. 13
  216. 13
  217. 13
  218. 13
  219. 13
  220. 13
  221. 13
  222. 13
  223. 12
  224. 12
  225. 12
  226. 12
  227. 12
  228. 12
  229. 12
  230. 12
  231. 12
  232. 12
  233. 12
  234. 12
  235. 12
  236. 12
  237. 12
  238. 12
  239. 12
  240. 12
  241. 11
  242. 11
  243. 11
  244. 11
  245. 11
  246. 11
  247. 11
  248. 11
  249. 11
  250. 11
  251. 11
  252. 11
  253. 11
  254. 11
  255. 11
  256. 11
  257. 10
  258. 10
  259. 10
  260. 10
  261. 10
  262. 10
  263. 10
  264. 10
  265. 10
  266. 10
  267. 10
  268. 10
  269. 10
  270. 10
  271. 10
  272. 10
  273. 10
  274. 10
  275. 10
  276. 10
  277. 10
  278. 10
  279. 10
  280. 10
  281. 10
  282. 10
  283. 10
  284. 9
  285. 9
  286. 9
  287. 9
  288. 9
  289. 9
  290. 9
  291. 9
  292. 9
  293. 9
  294. 9
  295. 9
  296. 9
  297. 9
  298. 9
  299. 9
  300. 9
  301. 9
  302. 9
  303. 9
  304. 9
  305. 9
  306. 9
  307. 9
  308. 9
  309. 9
  310. 9
  311. 8
  312. 8
  313. 8
  314. 8
  315. 8
  316. 8
  317. 8
  318. 8
  319. 8
  320. 8
  321. 8
  322. 8
  323. 8
  324. 8
  325. 8
  326. 8
  327. 8
  328. 8
  329. 8
  330. 8
  331. 8
  332. 8
  333. 8
  334. 8
  335. 8
  336. 8
  337. 8
  338. 8
  339. 8
  340. 8
  341. 8
  342. 8
  343. 8
  344. 8
  345. 8
  346. 8
  347. 8
  348. 7
  349. 7
  350. 7
  351. 7
  352. 7
  353. 7
  354. 7
  355. 7
  356. 7
  357. 7
  358. 7
  359. 7
  360. 7
  361. 7
  362. 7
  363. 7
  364. 7
  365. 7
  366. 7
  367. 7
  368. 7
  369. 7
  370. 7
  371. 7
  372. 7
  373. 7
  374. 7
  375. 7
  376. 7
  377. 7
  378. 7
  379. 7
  380. 7
  381. 7
  382. 7
  383. 7
  384. 7
  385. 7
  386. 7
  387. 7
  388. 7
  389. 7
  390. 7
  391. 7
  392. 7
  393. 7
  394. 7
  395. 7
  396. 7
  397. 7
  398. 7
  399. 7
  400. 7
  401. 7
  402. 7
  403. 6
  404. 6
  405. 6
  406. 6
  407. 6
  408. 6
  409. 6
  410. 6
  411. 6
  412. 6
  413. 6
  414. 6
  415. 6
  416. 6
  417. 6
  418. 6
  419. 6
  420. 6
  421. 6
  422. 6
  423. 6
  424. 6
  425. 6
  426. 6
  427. 6
  428. 6
  429. 6
  430. 6
  431. 6
  432. 6
  433. 6
  434. 6
  435. 6
  436. 6
  437. 6
  438. 6
  439. 6
  440. As a software engineer, I think the only possible way forward is to make the required information available to all parties without limitations. If it were up to me, purchasing any hardware product would entitle you to the schematics of that specific product for free. The information in the schematics is already available to the owner of the hardware simply by scanning and probing it, so it's not really a trade secret, and the manufacturer already has the data in machine-readable format because they were able to design and build the hardware, so publishing it wouldn't cause them any extra cost. Similarly, the firmware and all the tools required to flash it should be freely available to hardware owners, but the actual source code used to build the firmware could be kept secret, as is usual with proprietary software. Again, the firmware can be extracted from the hardware, so this wouldn't force manufacturers to disclose any secrets. In practice, it would be easier to publish all of the above to the public as a whole instead of trying to publish it to hardware customers only. Spare parts are a much harder part of the problem to enforce via legislation because of patents: patents allow the patent owner to prevent spare parts from being sold even if third-party suppliers could manufacture them. Maybe require that patent licensing be embedded in the physical chips, and specify in legislation that third-party manufacturers are allowed to build copies of the chips as long as they pay the same licensing fees as the OEM did? An alternative would be to specify that the license is tied to one specific part of the hardware (CPU, motherboard, case?), that replacing other parts with spare parts does not require a new license, and that third-party manufacturers are free to manufacture those spare parts without any license.
    6
  441. 6
  442. 6
  443. 6
  444. 6
  445. 6
  446. 6
  447. 6
  448. 6
  449. 6
  450. 6
  451. 6
  452. 6
  453. 6
  454. 6
  455. 6
  456. 6
  457. 6
  458. 6
  459. 6
  460. 6
  461. 6
  462. 6
  463. 6
  464. 6
  465. 6
  466. 6
  467. 6
  468. 6
  469. 6
  470. 6
  471. 6
  472. 6
  473. 6
  474. 6
  475. 5
  476. 5
  477. 5
  478.  @Kabivelrat  The material that emits the actual photons is made out of organic compounds (the letter "O" in OLED) and that's the wear item. Because each pixel has its own light source, each pixel wears at a different rate depending on what the display has been used for over its whole lifetime. Running the display very bright drives the individual pixels with higher current, which wears each pixel faster. Basically, the best OLED manufacturers can do is estimate the wear for every pixel and automatically compensate for it. In practice, this is implemented by having pixels that could logically emit, say, 500 cd/m² when run at full blast while the display normally limits the maximum brightness to around 300 cd/m². After a pixel has emitted enough light over its lifetime, its estimated brightness loss is used to compensate the drive level: to get 300 cd/m² out of an old pixel, it may need the current that would have produced 450 cd/m² when new. However, this technique requires running the pixels with ever-increasing current levels as the display ages, and the more current you pump through individual pixels, the faster they wear. As a result, you can only prolong the life of the display so much with this trick. In addition, if the compensation algorithm is a poor match for reality, the display will show some burn-in artefacts even with compensation active. (A rough sketch of the compensation idea is below.) In the future, we'll hopefully have micro-LED based displays where each pixel is a direct semiconductor LED element, which doesn't wear in a similar way during use. Of course, LED elements fail over time too, but the failure typically happens much later and not because of wear but simply as a result of bad luck. However, micro-LED displays are really, really expensive to manufacture today because nobody has figured out how to make huge semiconductor elements cheaply.
    5
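
A minimal sketch of the per-pixel wear compensation described above, assuming a made-up linear wear model (efficiency loss proportional to accumulated light output) and the 500/300 cd/m² headroom figures from the comment; the Pixel class and all constants are hypothetical and only illustrate the feedback loop, not any real panel firmware.

    # Toy model of OLED per-pixel wear compensation (illustrative only).
    # Assumption: pixel efficiency degrades linearly with accumulated output,
    # which is a gross simplification of real OLED aging curves.

    PANEL_MAX_NITS = 500.0      # what a brand-new pixel could emit at full drive
    TARGET_MAX_NITS = 300.0     # what the display actually promises to the user
    WEAR_PER_NIT_HOUR = 1e-7    # hypothetical efficiency loss per nit-hour of use

    class Pixel:
        def __init__(self):
            self.efficiency = 1.0   # 1.0 = new, drops as the pixel ages
            self.nit_hours = 0.0    # accumulated light output (wear proxy)

        def drive_for(self, wanted_nits, hours):
            """Return the drive level (in 'new pixel nits') needed for wanted_nits,
            then account for the extra wear that the higher drive causes."""
            drive = min(wanted_nits / self.efficiency, PANEL_MAX_NITS)  # compensate, headroom is finite
            actual = drive * self.efficiency                            # what the user sees
            self.nit_hours += drive * hours                             # harder drive -> faster wear
            self.efficiency = max(0.1, 1.0 - WEAR_PER_NIT_HOUR * self.nit_hours)
            return actual

    # A pixel showing a bright static logo wears faster than its neighbours:
    logo, background = Pixel(), Pixel()
    for _ in range(10000):                 # 10000 hours of use, 1 hour steps
        logo.drive_for(TARGET_MAX_NITS, 1)
        background.drive_for(50, 1)
    print(f"logo efficiency {logo.efficiency:.2f}, "
          f"background efficiency {background.efficiency:.2f}")
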
  479. 5
  480. 5
  481. 5
  482. 5
  483. 5
  484. 5
  485. 5
  486. 5
  487. 5
  488. 5
  489. 5
  490. 5
  491. 5
  492. 5
  493. 5
  494. 5
  495. 5
  496. 5
  497. 5
  498. 5
  499. 5
  500. 5
  501. 5
  502. 5
  503. 5
  504. 5
  505. 5
  506. 5
  507. 5
  508. 5
  509. 5
  510. 5
  511. 5
  512. 5
  513. 5
  514. 5
  515. 5
  516. 5
  517. 5
  518. 5
  519. 5
  520. 5
  521. 5
  522. 5
  523. 5
  524. 5
  525. 5
  526. 5
  527. 5
  528. 5
  529. 5
  530. 5
  531. 5
  532. 5
  533. 5
  534. 5
  535. 5
  536. 5
  537. Actually, you cannot reduce the power of a diesel engine by increasing fueling. As long as there are any unburnt oxygen atoms inside the cylinder, dumping in more fuel will improve the power output slightly. However, the black smoke is caused only by unburnt fuel that hasn't successfully mixed with the oxygen. And of course the efficiency goes down, because you're literally blowing unburnt diesel out of your tailpipe. It's my understanding that people doing this think it looks cool, or they imagine that the noise and smoke equal so much more power that it makes sense despite the side-effects. Some seem to do it as a protest against clean engines that somehow damage the brittle ego of the owner of said diesel car. The proper method to increase the power output of a diesel engine is to increase fueling while increasing the air intake at the same time; in practice, this means a high-boost turbo setup with large injectors, which gets pretty expensive pretty soon. Just dumping extra fuel and "rolling coal" is the redneck version of improving engine performance. Diesel engines always run lean because there's no throttle of any kind (the "rolling coal" setup is the only case where the engine is not running lean). You reduce power by reducing fueling, and you increase power by increasing fueling. Those of us who are not rolling coal stop injecting extra fuel at the point where the engine would start to emit black smoke (= partially unburnt fuel, which obviously causes poor economy). (A rough lambda calculation illustrating this is below.) Most modern diesel cars can be tuned to produce more power with just a computer reprogramming – however, the factory programming is selected to match the factory gearbox; unless you swap the gearbox (and maybe the clutch) for a stronger aftermarket alternative, the chances are high that the gearbox is going to fail pretty soon.
    5
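
A rough illustration of the "always lean, add air with the fuel" point above, assuming a stoichiometric air-fuel ratio of about 14.5:1 for diesel and a hand-picked, approximate visible-smoke threshold around lambda 1.2; the flow numbers are invented examples, not tuning advice.

    # Rough excess-air ratio (lambda) check for a diesel operating point.
    # Assumptions: stoichiometric AFR ~14.5:1 for diesel, visible black smoke
    # appearing somewhere around lambda < ~1.2 (approximate rule of thumb).

    STOICH_AFR = 14.5
    SMOKE_LIMIT_LAMBDA = 1.2

    def diesel_lambda(air_g_per_s: float, fuel_g_per_s: float) -> float:
        """Excess-air ratio for a given air and fuel mass flow."""
        return (air_g_per_s / fuel_g_per_s) / STOICH_AFR

    # Stock-ish operating point: plenty of excess air, no smoke.
    print(round(diesel_lambda(air_g_per_s=100, fuel_g_per_s=4), 2))   # ~1.72

    # "More fuel only": a bit more power, but lambda falls below the smoke limit.
    print(round(diesel_lambda(air_g_per_s=100, fuel_g_per_s=7), 2))   # ~0.99

    # More fuel *and* more boost: same fueling, lambda stays above the limit.
    print(round(diesel_lambda(air_g_per_s=150, fuel_g_per_s=7), 2))   # ~1.48
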
  538. 5
  539. 5
  540. 5
  541. 5
  542. 5
  543. 5
  544. 5
  545. 5
  546. 5
  547. 5
  548. 5
  549. 5
  550. 5
  551. 5
  552. 5
  553. 5
  554. 5
  555. 5
  556. 5
  557. 4
  558. 4
  559. 4
  560. 4
  561. 4
  562. 4
  563. 4
  564. 4
  565. 4
  566. 4
  567. 4
  568. 4
  569. 4
  570. 4
  571. 4
  572. 4
  573. 4
  574. 4
  575. 4
  576. 4
  577. 4
  578. 4
  579. 4
  580. 4
  581. 4
  582. 4
  583. 4
  584. 4
  585. 4
  586. 4
  587. 4
  588. 4
  589. 4
  590. 4
  591. 4
  592. 4
  593. 4
  594. 4
  595. 4
  596. 4
  597. 4
  598. 4
  599. 4
  600. 4
  601. 4
  602. 4
  603. 4
  604. 4
  605. 4
  606. 4
  607. 4
  608. 4
  609. 4
  610. 4
  611. 4
  612. 4
  613. 4
  614. 4
  615. 43:10 The days or even weeks I spend in the state "I'm thinking more than writing code" are when there are no good solutions given the existing infrastructure and the task at hand – only multiple options to proceed, each with various obvious downsides. In practice, "thinking" in that case means going through existing programs (searching for something similar and looking at the pros and cons each solution had), implementations (typically reading the code of some open source libraries to understand how they handle the problematic edge cases), research papers, writing some test code, etc. That's the "research" in R&D. I have trouble imagining a coder who just sits there meditating and then comes up with a good solution to finally write down. Some call this maintaining a legacy system, but I think it also covers making any complex change to any big system, no matter how old or new the code is. Legacy systems are just typically bigger than newly created (toy?) projects. And you get old, hairy legacy systems as a result if you repeatedly skip the thinking and research part and always go for the simplest solution you can think of without considering the downsides. Basically: how much technical debt is your next change creating for the whole system? If you ignore the debt, making changes is faster, but it will bite you later for sure. On the other hand, you don't want to waste time trying to create a perfect solution either, because perfect is the enemy of good and it takes insane amounts of time to create perfect solutions.
    4
  616. 4
  617. 4
  618. 4
  619. 4
  620. 4
  621. 4
  622. 4
  623. 4
  624. 4
  625. 4
  626. 4
  627. 4
  628. 4
  629. 4
  630. 4
  631. 4
  632. 4
  633. 4
  634. 4
  635. 4
  636. 4
  637. 4
  638. 4
  639. 4
  640. 4
  641. 4
  642. 4
  643. 4
  644. 4
  645. 4
  646. 4
  647. 4
  648.  @paulfogarty7724  Considering all the crap people are saying about Ryanair, I had to check this. I quickly went through Ryanair's incident history and it indeed appears that there has never been a crash. In fact, the most serious items I found were these:
    FR-654 in 2019: the FO was incapacitated and the captain returned for a safe landing.
    FR-3918 in 2018: the FO was incapacitated; the captain declared Mayday in line with standard operating procedures and diverted to Trapani.
    FR-7312 in 2018: loss of cabin pressure; the crew reacted in 2 seconds and initiated an emergency descent as expected.
    FR-1192 in 2018: near collision when two planes got within 2.2 nm of each other; the cause was the failure of the PAL sector controller to identify the conflict in time.
    FR-314 in 2017: overran the runway on landing and came to a stop on the paved surface of the runway; there were no injuries.
    FR-4060 in 2017: tailstrike; returned for a safe landing after burning enough fuel.
    FR-817 in 2016: declared pan-pan because icing caused engine problems; the aircraft made a normal approach and landing at Dublin and all passengers disembarked normally.
    FR-2446 in 2014: a loss of separation occurred because of controller error.
    FR-2848 in 2014: near collision due to controller failure; the separation between the aircraft reduced to 100 feet vertically and 1.4 nm laterally.
    FR-3152 in 2013: the captain was incapacitated; the FO diverted the aircraft to Faro (Portugal), 160 nm southeast of their position, for a safe landing on runway 10 about 25 minutes later.
    FR-3595 in 2013: the separation between the aircraft reduced to 0.8 nm laterally and 650 feet vertically, involving a high risk of collision; the cause seemed to be both the controller and the crew not following proper radio protocol (not using their call sign, not requiring read-back).
    FR-1664 in 2012: the right pitot heating failed without indication, causing an instrument malfunction; the crew correctly diagnosed the problem and continued for a safe landing.
    Some flight in 2011 with a Boeing 737-800: the FO was incapacitated; the aircraft landed safely on Girona's runway 20 about 45 minutes after the first officer handed the controls to the captain.
    Some flight in 2010 with a Boeing 737-800: had to declare Mayday after diverting to an alternate airport; the crew's inadequate decision-making caused the fuel amount to drop below the required minimum reserve fuel (the legal minimum was 1139 kg and after landing the aircraft had only 956 kg).
    In 2010, a little girl fell through the gap between the handrail and the platform of the boarding stairs in Spain and received fractures of the ulna and radius of the left forearm. The CIAIAC analysis found that although the extendable handrails protect adults sufficiently against falls, the gap between the handrail and the platform is a danger for small children.
    In addition to that, there were technical issues causing diversions, but I couldn't find anything really serious. For example, FR-7411 in 2019 had to shut down one engine in flight due to lack of oil pressure. Seems like a surprisingly good track record for any airline with a similar number of flights!
    4
  649. 4
  650. 4
  651. 4
  652. 4
  653. 4
  654. 4
  655. 4
  656. 4
  657. 4
  658. 4
  659. 4
  660. 4
  661. 4
  662. 4
  663. 4
  664. 4
  665. 4
  666. 4
  667. 4
  668. 4
  669. 4
  670. 4
  671. 4
  672. 4
  673. 4
  674. 4
  675. 4
  676. 4
  677. 4
  678. 4
  679. 3
  680. 3
  681. 3
  682. 3
  683. 3
  684. 3
  685. 3
  686. 3
  687. 3
  688. 3
  689. 3
  690. 3
  691. 3
  692. 3
  693. Yeah, the solar irradiation maps often don't bother including Scandinavia at all because PV doesn't make much sense up here in the North. That doesn't seem to stop PV panel dealers from trying to sell full installations, though. (PV panel setups are usually designed for 1000 W/m² of radiation and you can get maybe 300 W/m² on average here in Scandinavia, and the output is heavily tilted towards the summertime only – if the panels were cheap enough you could just triple the basic installation size, but currently that doesn't make much sense. In addition, most of the yearly output from those PV cells happens during the same part of the year when we need the least energy! During the peak electricity usage in the middle of the winter, PV panel output is approximately 0%. Of course, you could generate synthetic fuels from the electricity during the summer, but the efficiency of that process is about 30%, so you get to triple the PV panel setup once more. So to make PV panels worthwhile here in Scandinavia, the prices would need to drop at least 90% compared to the current best performance/cost panels. I think it will happen in the future, but we're not there yet. And that's the best-case situation where you can store hydrogen from summer to winter and use fuel cells to generate electricity. If you generate e.g. synthetic methanol or gasoline, you're going to lose another ~70% of the output if you want electricity out during the winter, so you would need to triple the installation yet another time. Storing energy for half a year with reasonable efficiency is still an unsolved problem; if you're willing to lose about 90% of the energy in the process, it's doable already. A back-of-the-envelope version of this sizing math is sketched below.)
    3
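
A back-of-the-envelope version of the sizing argument above, using the numbers from the comment itself (1000 W/m² rating vs. ~300 W/m² local average, ~30% round-trip efficiency via hydrogen, ~10% via synthetic liquid fuel); these are rough figures, not measurements.

    # Back-of-the-envelope PV oversizing for Scandinavia (numbers from the comment).

    RATED_IRRADIANCE = 1000.0        # W/m2, what panel setups are designed for
    LOCAL_AVG_IRRADIANCE = 300.0     # W/m2, rough Scandinavian average
    H2_ROUNDTRIP_EFF = 0.30          # electricity -> hydrogen -> electricity
    SYNFUEL_ROUNDTRIP_EFF = 0.10     # electricity -> methanol/gasoline -> electricity

    base_oversize = RATED_IRRADIANCE / LOCAL_AVG_IRRADIANCE      # ~3.3x
    with_h2_storage = base_oversize / H2_ROUNDTRIP_EFF            # ~11x
    with_synfuel_storage = base_oversize / SYNFUEL_ROUNDTRIP_EFF  # ~33x

    print(f"irradiance alone:    {base_oversize:.1f}x the nominal installation")
    print(f"+ hydrogen storage:  {with_h2_storage:.1f}x")
    print(f"+ synthetic fuel:    {with_synfuel_storage:.1f}x")
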
  694. 3
  695. 3
  696. 3
  697. 3
  698. 3
  699. 3
  700. 3
  701. 3
  702. 3
  703. 3
  704. 3
  705. 3
  706. 3
  707. 3
  708. 3
  709. 3
  710. 3
  711. 3
  712. 3
  713. 3
  714. 3
  715. 3
  716. 3
  717. 3
  718. 3
  719. 3
  720. 3
  721. 3
  722. 3
  723. 3
  724. 3
  725. 3
  726. 3
  727. 3
  728. 3
  729. 3
  730. 3
  731. 3
  732. 3
  733. 3
  734. 3
  735. 3
  736. 3
  737. 3
  738. 3
  739. 3
  740. 3
  741. 3
  742. 3
  743. 3
  744. 3
  745. 3
  746. 3
  747. 3
  748. 3
  749. 3
  750. 3
  751. 3
  752. 3
  753. 3
  754. 3
  755. 3
  756. 6:10 I think these examples are not truthful. If you took a 17-year-old from the year 1200 and put him or her into a modern car, it would take quite some time to teach them to drive. Similarly, a ten-year-old kid from the year 1200 would have a hard time learning to use a dishwasher one-shot. It's only because today's 10 and 17 year olds have seen a lot of examples during their lives that they have a pretty good idea of what to do. As I see it, the LLM matches the human thinking part pretty well, but the AI still needs an improved vision system to feed suitable data to the thinking part, and the AI also needs methods to interact with the world. A human baby at the age of a couple of weeks still seems to fail to understand that he or she has these things called limbs, and may be failing to understand the concept of self – we don't yet know for sure. And as a result, we don't know how close to AGI we really are. Even if we had some commonly agreed definition for AGI, which we don't. In fact, humans still don't have a commonly agreed definition even for human intelligence. We have IQ tests that actually measure the g-factor, and we sometimes pretend that it's the same thing as intelligence, but we don't really believe it in the end. Considering we still don't have a cost-effective way to run the inference step of an LLM (equal or better energy efficiency compared to the human brain), the big question is who can afford to run AGI even if it were invented during 2025. Let's assume OpenAI can scale ChatGPT o4 or o5 to a true AGI level but it costs $15,000 per task to run because of the amount of compute needed – who is going to run it for anything?
    3
  757. 3
  758. 3
  759. 3
  760. 3
  761. 3
  762. 3
  763. 3
  764. 3
  765. 3
  766. 3
  767. 3
  768. 3
  769. 3
  770. 3
  771. 3
  772. 3
  773. 3
  774. 3
  775. 3
  776. 3
  777. 3
  778. 3
  779. 3
  780. 3
  781. 3
  782. 3
  783. 3
  784. 3
  785. 3
  786. 3
  787. 3
  788. 3
  789. 3
  790. 3
  791. 3
  792. 3
  793. 3
  794. 3
  795. 3
  796. 3
  797. 3
  798. 3
  799. 3
  800. 3
  801. 3
  802. 3
  803. 3
  804. 3
  805. 3
  806. 3
  807. 3
  808. 3
  809. 3
  810. 3
  811. 3
  812. 3
  813. 3
  814. 3
  815. 3
  816. 3
  817. 3
  818. 3
  819. 3
  820. 3
  821. 3
  822. 3
  823. 3
  824. 3
  825. 3
  826. 3
  827. 3
  828. 3
  829. 3
  830. 3
  831. 3
  832. 3
  833. 3
  834. 3
  835. 3
  836. 3
  837. 3
  838. 3
  839. 3
  840. 3
  841. 3
  842. 3
  843. 3
  844. 3
  845. 3
  846. 3
  847. 3
  848. 3
  849. 3
  850. 3
  851. 3
  852. 3
  853. 3
  854. 3
  855. 3
  856. 3
  857. 3
  858. 3
  859. 3
  860. 3
  861. 3
  862. 3
  863. 3
  864. 3
  865. 3
  866. 3
  867. 3
  868. 3
  869. 3
  870. 3
  871. 3
  872. 3
  873. 3
  874. 3
  875. 3
  876. 3
  877. 3
  878. 3
  879. 3
  880. 3
  881. 3
  882. 3
  883. 3
  884. 3
  885. 3
  886. 3
  887. 3
  888. 3
  889. 3
  890. 3
  891. 3
  892. 3
  893. 3
  894. 3
  895. 3
  896. 3
  897. I like both C and Rust, and even though I think that writing C code is "easier" than writing Rust (may still be a skill issue on my part), I have to honestly say that writing perfect C code is next to impossible. I think we can all agree that Linux kernel developers are among the most skilled C developers in the world. And look at the results: those top-of-the-top developers write C code that is publicly reviewed on LKML before being accepted into the kernel, and we still end up with all kinds of CVEs. If these people are still not skilled enough to avoid even security bugs in C, maybe C is not a good solution in the long run. Will the parts written in Rust ever have CVEs? Probably yes, but that will be a much, much rarer event than with C. And such a CVE would very probably be about logic errors, not about memory corruption or thread safety, unlike the CVEs in code written in C. Yes, slowly switching to Rust in the Linux kernel would require nearly all kernel developers to learn both C and Rust. I don't consider that too high a minimum bar for joining the kernel developer community. The level of C understanding you currently need is pretty high already, and Rust isn't that hard a language to learn if you really put some effort into it. That said, I totally understand that the NIMBY effect works for software development, too. All the greybeards who have made an entire career with Linux dealing only with C code will not be happy to hear that their way of working wouldn't be good enough and that they would need to learn something else. In the end, the important question will be what's acceptable when it comes to code quality and performance. If writing stuff in Rust is 10x harder, is it worth it if it can cut 90% of all the yearly security bugs? What about if it can cut only 20% of all the bugs?
    3
  898. 3
  899. 3
  900. 3
  901. 3
  902. 3
  903. 3
  904. 3
  905. 3
  906. 3
  907. 3
  908. 3
  909. 3
  910. 3
  911. 3
  912. 3
  913. 2
  914. 2
  915. 2
  916. 2
  917. 2
  918. 2
  919. 2
  920. 2
  921. 2
  922. 2
  923. 2
  924. 2
  925. 2
  926. 2
  927. 2
  928. 2
  929. 2
  930. 2
  931. 2
  932. 2
  933. 2
  934. 2
  935. 2
  936. 2
  937. 2
  938. 2
  939. 2
  940. 2
  941. 2
  942. 2
  943. 2
  944. 2
  945. 2
  946. 2
  947. 2
  948. 2
  949. 2
  950. 2
  951. 2
  952. 2
  953. 2
  954. 2
  955. 2
  956. 2
  957. 2
  958. 2
  959. 2
  960. 2
  961. 2
  962. 2
  963. 2
  964. 2
  965. 2
  966. 2
  967. 2
  968. 2
  969. 2
  970. 2
  971. 2
  972. 2
  973. 2
  974. 2
  975. 2
  976. 2
  977. 2
  978. 2
  979. 2
  980. 2
  981. 2
  982. 2
  983. 2
  984. 2
  985. 2
  986. 2
  987. 2
  988. 2
  989. 2
  990. 2
  991. 2
  992. 2
  993. 2
  994. 2
  995. 2
  996. 2
  997. 2
  998. 2
  999. 2
  1000. 2
  1001. 2
  1002. 2
  1003. 2
  1004. 2
  1005. 2
  1006. 2
  1007. 2
  1008. 2
  1009. 2
  1010. 2
  1011. 2
  1012. 2
  1013. 2
  1014. 2
  1015. 2
  1016. 2
  1017. 2
  1018. 2
  1019. 2
  1020. 2
  1021. 2
  1022. 2
  1023. 2
  1024. 2
  1025. 2
  1026. 2
  1027. 2
  1028. 2
  1029. 2
  1030. 2
  1031. 2
  1032. 2
  1033. 2
  1034. 2
  1035. 2
  1036. 2
  1037. 2
  1038. 2
  1039. 2
  1040. 2
  1041. 2
  1042. 2
  1043. 2
  1044. 2
  1045. 2
  1046. 2
  1047. 2
  1048. 2
  1049. 2
  1050. 2
  1051. 2
  1052. 2
  1053. 2
  1054. 2
  1055. 2
  1056. 2
  1057. 2
  1058. 2
  1059. 2
  1060. 2
  1061. 2
  1062. 2
  1063. 2
  1064. 2
  1065. 2
  1066. 2
  1067. 2
  1068. 2
  1069. 2
  1070. 2
  1071. 2
  1072. 2
  1073. 2
  1074. 2
  1075. 2
  1076. 2
  1077. 2
  1078. 2
  1079. 2
  1080. 2
  1081. 2
  1082. 2
  1083. 2
  1084. 2
  1085. 2
  1086. 2
  1087. 2
  1088. 2
  1089. 2
  1090. 2
  1091. 2
  1092. 2
  1093. 2
  1094. 2
  1095. 2
  1096. 2
  1097. 2
  1098. 2
  1099. 2
  1100. 2
  1101. 2
  1102. 2
  1103. 2
  1104. 2
  1105. 2
  1106. 2
  1107. 2
  1108. 2
  1109. 2
  1110. 2
  1111. 2
  1112. 2
  1113. 2
  1114. 2
  1115. 2
  1116. 2
  1117. 2
  1118. 2
  1119. 2
  1120. 2
  1121. 2
  1122. 2
  1123. 2
  1124. 2
  1125. 2
  1126. 2
  1127. 2
  1128. 2
  1129. 2
  1130. 2
  1131. 2
  1132. 2
  1133. 2
  1134. 2
  1135. 2
  1136. 2
  1137. 2
  1138. 2
  1139. 2
  1140. 2
  1141. 2
  1142. 2
  1143. 2
  1144. 2
  1145. 2
  1146. 2
  1147. 2
  1148. 2
  1149. 2
  1150. 2
  1151. 2
  1152. 2
  1153. 2
  1154. 2
  1155. 2
  1156. 2
  1157. 2
  1158. 2
  1159. 2
  1160. 2
  1161. 2
  1162. 2
  1163. 2
  1164. 2
  1165. 2
  1166.  @Robbya10  I think that any optical system may miss at least one of: fine dust, oil, a frozen surface, a too-smooth surface vs. a stone-like surface, moisture, etc. The thing you're actually interested in is how much friction you actually have between the surface and the rubber tread currently on your wheels, and the only way to make sure is to actually test for exactly that. The problem is that rubber is not actually solid under pressure (rubber is somewhere between a solid and a liquid), so the only way to actually measure the friction is to apply enough pressure and braking force to get the rubber under realistic shear and compression forces. And at that point, you could just use your real wheels for measuring the actual surface friction. For example, every 10 seconds, try braking one (random?) tire up to the shear limit while the other 3 tires push forward to avoid losing speed, so you can do this without passengers noticing anything. If you do the braking with regenerative braking only, you can recapture about 80% of the energy, so it wouldn't be that wasteful; basically the only problem is the extra tire surface wear. You could also have additional sensors. For example, use microphones to listen to how the tires sound as they touch the surface, and if the sound doesn't change, assume that the surface is still similar and you don't need to retest the friction. You could also use humidity and temperature sensors to figure out if there's a risk of surface ice or aquaplaning. For dry and warm locations without dust on the road, there's little need to check the friction. However, on roads near 0 °C with high relative humidity, black ice is a real threat, and there you might want to check every 2 seconds, etc. (A sketch of this probing schedule is below.)
    2
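
A small sketch of the probing idea above: pick one wheel at random, briefly brake it toward the slip limit while the others compensate, and adapt the probing interval to temperature and humidity. Everything here (function names, thresholds, the fake sensor/actuator calls) is hypothetical; it only illustrates the scheduling logic, not any real vehicle interface.

    import random
    import time

    # Illustrative scheduler for "probe one tire's friction every N seconds".
    # All thresholds and the probe implementation are made up for the sketch.

    def probe_interval_s(air_temp_c: float, rel_humidity: float) -> float:
        """Probe more often when black ice or aquaplaning is plausible."""
        if -2.0 <= air_temp_c <= 3.0 and rel_humidity > 0.85:
            return 2.0      # near freezing and humid: check every 2 s
        if rel_humidity > 0.95:
            return 5.0      # heavy rain risk
        return 10.0         # dry and warm: relaxed schedule

    def probe_one_wheel(wheel: int) -> float:
        """Pretend to brake one wheel up to its slip limit (regen braking only)
        and return an estimated friction coefficient. Placeholder implementation."""
        return random.uniform(0.1, 0.9)

    def control_loop(air_temp_c: float, rel_humidity: float, cycles: int = 3) -> None:
        for _ in range(cycles):
            wheel = random.randrange(4)            # random wheel so wear spreads out
            mu = probe_one_wheel(wheel)
            if mu < 0.3:
                print(f"wheel {wheel}: low grip (mu~{mu:.2f}), adapt speed/distance")
            else:
                print(f"wheel {wheel}: grip ok (mu~{mu:.2f})")
            time.sleep(probe_interval_s(air_temp_c, rel_humidity))

    control_loop(air_temp_c=0.5, rel_humidity=0.9)
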
  1167. 2
  1168. 2
  1169. 2
  1170. 2
  1171. 2
  1172. 2
  1173. 2
  1174. 2
  1175. 2
  1176. 2
  1177. 2
  1178. 2
  1179. 2
  1180. 2
  1181. 2
  1182. 2
  1183. 2
  1184. 2
  1185. 2
  1186. 2
  1187. 2
  1188. 2
  1189. 2
  1190. 2
  1191. 2
  1192. 2
  1193. 2
  1194. 2
  1195. 2
  1196. 2
  1197. 2
  1198. 2
  1199. 2
  1200. 2
  1201. 2
  1202. 2
  1203. 2
  1204. 2
  1205. 2
  1206. 2
  1207. 2
  1208. 2
  1209. 2
  1210. 2
  1211. 2
  1212. 2
  1213. 2
  1214. 2
  1215. 2
  1216. 2
  1217. 2
  1218. 2
  1219. 2
  1220. 2
  1221. 2
  1222. 2
  1223. 2
  1224. 2
  1225. 2
  1226. 2
  1227. 2
  1228. 2
  1229. 2
  1230. 2
  1231. 2
  1232. 2
  1233. 2
  1234. 2
  1235. 2
  1236. 2
  1237. 2
  1238. 2
  1239. 2
  1240. 2
  1241. 2
  1242. 2
  1243. 2
  1244. 2
  1245. 2
  1246. 2
  1247. 2
  1248. 2
  1249. 2
  1250. 2
  1251. 2
  1252. 2
  1253. 2
  1254. 2
  1255. 2
  1256. 2
  1257. 2
  1258. 2
  1259. 2
  1260. 2
  1261. 2
  1262. 2
  1263. 2
  1264. 2
  1265. 2
  1266. 2
  1267. 2
  1268. 2
  1269. 2
  1270. 2
  1271. 2
  1272. 2
  1273. 2
  1274. 2
  1275. 2
  1276. 2
  1277. 2
  1278. 2
  1279. 2
  1280. 2
  1281. 2
  1282. 2
  1283. 2
  1284. 2
  1285. 2
  1286. 2
  1287. 2
  1288. 2
  1289. 2
  1290. 2
  1291. 2
  1292. 2
  1293. 2
  1294. 2
  1295. 2
  1296. 2
  1297. 2
  1298. 2
  1299. 2
  1300. 2
  1301. 2
  1302. 2
  1303. 2
  1304. 2
  1305. 2
  1306. 2
  1307. 2
  1308. 2
  1309. 2
  1310. 2
  1311. 2
  1312. 2
  1313. 2
  1314. 2
  1315. 2
  1316. 2
  1317. 2
  1318. 2
  1319. 2
  1320. 2
  1321. 2
  1322. 2
  1323. 2
  1324. 2
  1325. 2
  1326. 2
  1327. 2
  1328. 2
  1329. 2
  1330. 2
  1331. 2
  1332. 2
  1333. 2
  1334. 2
  1335. 2
  1336. 2
  1337. 2
  1338. 2
  1339. 2
  1340.  @johanmetreus1268  I agree. I think the least bad solution would be something that's sometimes called a "declared value" copyright system. The idea is to grant a gratis copyright, similar to the copyright we currently have, for 5 years only. After that, all works fall into the public domain immediately. You can extend your copyright indefinitely by declaring its monetary value and paying 1% (exact figure to be decided) of that value as a yearly fee, in exchange for the public not having the work in the public domain and for legislation protecting your intellectual property. If anybody pays the sum of the declared value to you, the work immediately falls into the public domain. This would have the following results: (1) If your work has no monetary value to you, you won't register it for a fee, and we have more public domain content. (2) If you register the work, you have a strong incentive to declare its true value. If you overinflate the declared value, you have to pay an overinflated yearly fee to keep your copyright. If you undervalue it, you pay a lower yearly fee and the public can free your work for less than it's worth. (3) Nobody can force you to lose access to the work, because it's not a forced sale but a forced release into the public domain. Once you get the monetary compensation you've decided on (the declared value of the work), you should be happy with the compensation, and the whole public can then freely enjoy your work. (4) Disney could keep Mickey Mouse behind bars, but they couldn't do it without paying compensation to society; Mickey Mouse would go free once Mickey no longer made enough money every year. (5) If you encounter any piece of work that you know is older than 5 years and it hasn't been registered, you can be sure it's public domain. I'm pretty sure everybody would agree that having more public domain content would be better for everybody. The current system results in a huge amount of abandoned work with zero financial value being unusable for a century! (A tiny worked example is below.)
    2
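
A tiny worked example of the declared-value scheme described above, using the 5-year gratis term and the 1% yearly fee from the comment; the class, the title and the dollar figures are purely illustrative.

    # Toy model of the "declared value copyright" scheme from the comment:
    # 5 gratis years, then a yearly fee of 1% of the self-declared value,
    # and anyone may free the work to the public domain by paying that value.

    GRATIS_YEARS = 5
    YEARLY_FEE_RATE = 0.01

    class RegisteredWork:
        def __init__(self, title: str, declared_value: float):
            self.title = title
            self.declared_value = declared_value
            self.public_domain = False

        def yearly_fee(self, years_since_publication: int) -> float:
            if self.public_domain or years_since_publication < GRATIS_YEARS:
                return 0.0
            return self.declared_value * YEARLY_FEE_RATE

        def buy_out(self, offered: float) -> bool:
            """Anyone paying the declared value frees the work to the public domain."""
            if offered >= self.declared_value:
                self.public_domain = True
            return self.public_domain

    mouse = RegisteredWork("Famous cartoon mouse", declared_value=1_000_000_000)
    print(mouse.yearly_fee(years_since_publication=95))   # 1% of the declared value per year
    print(mouse.buy_out(offered=1_000_000_000))           # or pay the declared value once
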
  1341. 2
  1342. 2
  1343. 2
  1344. 2
  1345. 2
  1346. 2
  1347. 2
  1348. 2
  1349. 2
  1350. 2
  1351. 2
  1352. 2
  1353. 2
  1354. 2
  1355. 2
  1356. 2
  1357. 2
  1358. 2
  1359. 2
  1360. 2
  1361. 2
  1362. 2
  1363. 2
  1364. 2
  1365. 2
  1366. 2
  1367. 2
  1368. 2
  1369. 2
  1370. 2
  1371. 2
  1372. 2
  1373. 2
  1374. 2
  1375. 2
  1376. 2
  1377. 2
  1378. 2
  1379. 2
  1380. 2
  1381. 2
  1382. 2
  1383. 2
  1384. 2
  1385. 2
  1386. 2
  1387. 2
  1388. 2
  1389. 2
  1390. 2
  1391. 2
  1392. 2
  1393. 2
  1394. 2
  1395. 2
  1396. 2
  1397. 2
  1398. 2
  1399. 2
  1400. 2
  1401. 2
  1402. 2
  1403. 2
  1404. 2
  1405. 2
  1406. 2
  1407. 2
  1408. 2
  1409. 2
  1410. 2
  1411. 2
  1412. 2
  1413. 2
  1414. 2
  1415. 2
  1416. 2
  1417. 2
  1418. 2
  1419. 2
  1420. Making a pretty good prediction for I/O operations requires keeping track of the average latency per file (metadata) plus the average bandwidth for reads and writes. Many I/O devices have nearly the same latency for a single 4 KB and a 1 MB read operation(*) because the overhead at the start of the operation is so big – especially with HDDs. As a result, you would also need to know how fragmented the filesystem is to give a good estimate, because reading a big file from a fragmented filesystem requires reading a huge number of small blocks. Then there's the psychological side of time estimates: you should always overestimate the operation a bit, because the user will be much happier with an estimate of 11 minutes and the actual operation taking 8 minutes than the other way around. If the Windows file transfer estimate kept historical performance data for the source, the target and the pathway, it could give more accurate estimates once it knows how many files and directories it's going to copy and the sizes of each of those files. The same goes for network transfers: it could figure out that the max you can get from your internet connection is 10 Mbps, so no estimate should promise faster progress than that no matter what the source server is. On the other hand, it might learn that some random server is really slow and is always limited to around 2 Mbps. The next question would be: how important are those estimates to you? Do you think there would be something more important to develop or fix in the system? The modern solution would be to feed all the data you have into some kind of AI system and hope that it returns a good estimate. For most cases, it might be okay to just compute the total number of files and the total number of bytes, then every second compute the progress so far for both files and bytes, and use whichever results in the longer estimate. For non-changing performance it should converge to the correct value rapidly, and for more complex cases at least it shouldn't cause any rapid jumps in the estimate. (A sketch of this two-model estimator is below.) (*) The only major exception is Intel Optane drives. Those have really low latency and can do small operations much faster than any flash device by other manufacturers; the reason Intel can do this is that Optane doesn't use regular NAND like all other drives do.
    2
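
A minimal sketch of the "two estimates, report the longer one" idea from the comment: one model driven by file count (per-file metadata overhead), one by byte count (throughput), both smoothed; the smoothing constant, seed rates and pessimism margin are invented for the illustration.

    # Sketch: estimate remaining copy time from both a per-file and a per-byte model,
    # then report the more pessimistic of the two (under-promise, over-deliver).

    class TransferEstimator:
        def __init__(self, total_files: int, total_bytes: int):
            self.total_files = total_files
            self.total_bytes = total_bytes
            self.done_files = 0
            self.done_bytes = 0
            self.files_per_s = 1.0      # smoothed observed rates, seeded conservatively
            self.bytes_per_s = 1e6

        def update(self, files_done_last_s: int, bytes_done_last_s: int) -> float:
            """Call roughly once per second; returns remaining seconds (pessimistic)."""
            alpha = 0.2                 # exponential smoothing avoids jumpy estimates
            self.files_per_s += alpha * (files_done_last_s - self.files_per_s)
            self.bytes_per_s += alpha * (bytes_done_last_s - self.bytes_per_s)
            self.done_files += files_done_last_s
            self.done_bytes += bytes_done_last_s

            left_by_files = (self.total_files - self.done_files) / max(self.files_per_s, 1e-9)
            left_by_bytes = (self.total_bytes - self.done_bytes) / max(self.bytes_per_s, 1e-9)
            # Lots of tiny files left -> the file model dominates;
            # one huge file left -> the byte model dominates.
            return max(left_by_files, left_by_bytes) * 1.1   # small pessimism margin

    est = TransferEstimator(total_files=10_000, total_bytes=5_000_000_000)
    print(round(est.update(files_done_last_s=50, bytes_done_last_s=20_000_000)), "s remaining")
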
  1421. 2
  1422. 2
  1423. 2
  1424. 2
  1425. 2
  1426. 2
  1427. 2
  1428. 2
  1429. 2
  1430. 2
  1431. 2
  1432. 2
  1433. 2
  1434. 2
  1435. 2
  1436. 2
  1437. 2
  1438. 2
  1439. 2
  1440. 2
  1441. 2
  1442. 2
  1443. 2
  1444. 2
  1445. 2
  1446. 2
  1447. 2
  1448. 2
  1449. 2
  1450. 2
  1451. 2
  1452. 2
  1453. 2
  1454. 2
  1455. 2
  1456. 2
  1457. 2
  1458. 2
  1459. 2
  1460. 2
  1461. 2
  1462. 2
  1463. 2
  1464. 2
  1465. 2
  1466. 2
  1467. 2
  1468. 2
  1469. 2
  1470. 2
  1471. 2
  1472. 2
  1473. 2
  1474. 2
  1475. 2
  1476. 2
  1477. 2
  1478. 2
  1479. 2
  1480. 2
  1481. 2
  1482. 2
  1483. 2
  1484. 2
  1485. 2
  1486. 2
  1487. 2
  1488. 2
  1489. 2
  1490. 2
  1491. 2
  1492. 2
  1493. 2
  1494. 2
  1495. 2
  1496. 2
  1497. 2
  1498. 2
  1499. 2
  1500. 2
  1501. 2
  1502. 2
  1503. 2
  1504. 2
  1505. 2
  1506. 2
  1507. 2
  1508. 2
  1509. 2
  1510. 2
  1511. 2
  1512. 2
  1513. 2
  1514. 2
  1515. 2
  1516. 2
  1517. 2
  1518. 2
  1519. 2
  1520. 2
  1521. 2
  1522. 2
  1523. 2
  1524. 2
  1525. 2
  1526. 2
  1527. 2
  1528. 2
  1529. 2
  1530. 2
  1531. 2
  1532. 2
  1533. 2
  1534. 2
  1535. 2
  1536. 2
  1537. 2
  1538. 2
  1539. 2
  1540. 2
  1541. 2
  1542. 2
  1543. 2
  1544. 2
  1545. 2
  1546. 2
  1547. 2
  1548. 2
  1549. 2
  1550. 2
  1551. 2
  1552. 2
  1553. 2
  1554. 2
  1555. 2
  1556. 2
  1557. 2
  1558. 2
  1559. 2
  1560. 2
  1561. 2
  1562. 1
  1563. 1
  1564. 1
  1565. 1
  1566. 1
  1567. 1
  1568. 1
  1569. 1
  1570. 1
  1571. 1
  1572. 1
  1573. 1
  1574. 1
  1575. 1
  1576. 1
  1577. 1
  1578. 1
  1579. 1
  1580. 1
  1581. 1
  1582. 1
  1583. 1
  1584. 1
  1585. 1
  1586. 1
  1587. 1
  1588. 1
  1589. 1
  1590. 1
  1591. 1
  1592. 1
  1593. 1
  1594. 1
  1595. 1
  1596. 1
  1597. 1
  1598. 1
  1599. 1
  1600. 1
  1601. 1
  1602. 1
  1603. 1
  1604. 1
  1605. 1
  1606. 1
  1607. 1
  1608.  @Kabivelrat  The problem with photolithography is that a single silicon wafer (a circle with a diameter close to 300 mm) costs between $10K and $30K. A single PC monitor made using photolithography to create the micro-LED surface would definitely work, but even a smallish monitor would need 4 wafers to build. So a single micro-LED monitor made this way would cost between $40K and $120K, which is a bit more than the average consumer is willing to pay for a PC monitor. That's why I wrote that if somebody can figure out how to manufacture micro-LED displays cheaply enough, then it will be the winning technology. If you're willing to replace an OLED monitor every 1000 hours, you can keep using OLED monitors without any burn-in prevention methods that affect image quality, so that's the maximum cost for any new display technology: any tech more expensive than replacing OLED monitors every 1000 hours would be too expensive to manufacture. (This is because if you accept that an OLED monitor has a lifetime of only 1000 hours, you can run it bright enough regardless of the damage to the organic compounds, so you get only the good sides of OLED technology.) The reason for looking for more vibrant color primaries is the ability to produce more vibrant colors. Modern displays cannot produce e.g. really vibrant green or yellow because the maximum green or maximum red is not vibrant enough to cover the whole gamut of the human eye; current monitor tech can only show bright (that is, lots of photons) red and green. If you want to improve this, you need either more vibrant primaries or more than 3 primaries, and it turns out that using more vibrant primaries, e.g. with quantum dots, is the easier way forward considering the existing technologies and software. The problem with more vibrant primaries is that the current 8–10 bits per subpixel color spaces are too small and cause visible banding if the color space tries to cover them. However, it's much, much easier to change software from 8 bits per subpixel to 16 bits per subpixel than to convert to 4–6 primary colors, so my guess is that we'll see 16 bits per subpixel (48 bits in total) color spaces in the future. In practice, the color data will probably use 64 bits per pixel with 16 bits of padding, so that each pixel starts at a 64-bit boundary, which is easier for memory address computations (see the packing sketch below).
    1
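
A small sketch of the 64-bits-per-pixel layout speculated above: three 16-bit subpixels plus 16 bits of padding so each pixel starts on a 64-bit boundary. The channel order and putting the padding in the last word are arbitrary choices made for this illustration.

    import struct

    # Pack 16-bit-per-subpixel RGB into 64 bits per pixel (16 bits of padding),
    # so every pixel starts on a 64-bit boundary. Channel order is arbitrary here.

    def pack_rgb48(r: int, g: int, b: int) -> bytes:
        for c in (r, g, b):
            assert 0 <= c <= 0xFFFF, "16-bit channel expected"
        return struct.pack("<HHHH", r, g, b, 0)   # little-endian, last word = padding

    def unpack_rgb48(pixel: bytes) -> tuple[int, int, int]:
        r, g, b, _pad = struct.unpack("<HHHH", pixel)
        return r, g, b

    px = pack_rgb48(0xFFFF, 0x8000, 0x0000)
    print(len(px), unpack_rgb48(px))   # 8 bytes per pixel -> simple aligned addressing
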
  1609. 1
  1610. 1
  1611. 1
  1612. 1
  1613. 1
  1614. 1
  1615. 1
  1616. 1
  1617. 1
  1618. 1
  1619. 1
  1620. 1
  1621. 1
  1622. 1
  1623. Your video production quality is getting ridiculously high! And I don't mean that as a problem. As for the content, it seemed that TransAsia had get-there-itis with their captain training program. I would assume there were cultural issues, too. If I've understood correctly, denying a request is considered very rude in many Asian cultures, and that would make it harder to deny Captain A the type certification. The pilot certification seemed a lot like p-hacking in bad research, where you keep retrying until you get a borderline acceptable result and then immediately declare the test successful. I fully agree that this was a systematic failure within the company rather than the pilot just acting against better judgement. This is underlined by the fact that the first airline this person flew for correctly grounded him or her because of multiple problems in training. I'm still wondering how on earth this captain had a successful career in the air force; I would have expected the air force to be stricter about all pilot abilities than any commercial airline. I think pilot certification should be based on scientific measurements. For example, a valid method would be to decide a minimum acceptable success rate for any task during the training, say 95%, and this success rate should be defined while designing the course, not per applicant. If you fail some task (e.g. engine failure during takeoff in the simulator), you then have to repeat the task enough times to exceed the required rate – in practice, if you fail the task once, you have to successfully repeat it for a total of 19 times to get the success rate back to 95% or better (the arithmetic is sketched below), and even that only barely demonstrates a 95% success rate for correctly handling the situation. It seemed that Captain A had a success rate of 25% or less for handling an engine failure during takeoff even in the simulator. In a real-world situation the stress level would be even higher, so the probability is only going to get worse.
    1
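
The success-rate arithmetic from the comment as a quick check: after one failure you need 19 consecutive passes to get back to 95%. The helper below computes the required streak for any target rate; it is only the arithmetic, not a statement about real certification rules.

    import math

    # After `failures` failed attempts, how many successes are needed so that
    # successes / (successes + failures) >= target_rate?

    def successes_needed(failures: int, target_rate: float) -> int:
        if failures == 0:
            return 1   # at least one demonstration either way
        # s / (s + f) >= r  <=>  s >= f * r / (1 - r)
        return math.ceil(failures * target_rate / (1.0 - target_rate))

    print(successes_needed(failures=1, target_rate=0.95))   # 19, as in the comment
    print(successes_needed(failures=2, target_rate=0.95))   # 38
    print(successes_needed(failures=1, target_rate=0.99))   # 99
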
  1624. 1
  1625. 1
  1626. 1
  1627. 1
  1628. 1
  1629. 1
  1630. 1
  1631. 1
  1632. 1
  1633. 1
  1634. 1
  1635. 1
  1636. 1
  1637. 1
  1638. 1
  1639. 1
  1640. 1
  1641. 1
  1642. 1
  1643. 1
  1644. 1
  1645. 1
  1646. 1
  1647. 1
  1648. 1
  1649. 1
  1650. 1
  1651. 1
  1652. 1
  1653. 1
  1654. 1
  1655. 1
  1656. 1
  1657. 1
  1658. 1
  1659. 1
  1660. 1
  1661. 1
  1662. 1
  1663. 1
  1664. 1
  1665. 1
  1666. 1
  1667. 1
  1668. 1
  1669. 1
  1670. 1
  1671. 1
  1672. 1
  1673. 1
  1674. 1
  1675. 1
  1676. 1
  1677. 1
  1678. 1
  1679. 1
  1680. 1
  1681. 1
  1682. 1
  1683. 1
  1684. 1
  1685. 1
  1686. 1
  1687. 1
  1688. 1
  1689. 1
  1690. 1
  1691. 1
  1692. 1
  1693. 1
  1694. 1
  1695. 1
  1696. 1
  1697. 1
  1698. 1
  1699. 1
  1700. 1
  1701. 1
  1702. 1
  1703. 1
  1704. 1
  1705. 1
  1706. 1
  1707. 1
  1708. 1
  1709. 1
  1710. 1
  1711. 1
  1712. 1
  1713. 1
  1714. 1
  1715. 1
  1716. 1
  1717. 1
  1718. 1
  1719. 1
  1720. 1
  1721. 1
  1722. 1
  1723. 1
  1724. 1
  1725. 1
  1726. 1
  1727. 1
  1728. 1
  1729. 1
  1730. 1
  1731. 1
  1732. 1
  1733. 1
  1734. 1
  1735. 1
  1736. 1
  1737. 1
  1738. 1
  1739. 1
  1740. 1
  1741. 1
  1742. 1
  1743. 1
  1744. 1
  1745. 1
  1746. 1
  1747. 1
  1748. 1
  1749. 1
  1750. 1
  1751. 1
  1752. 1
  1753. 1
  1754. 1
  1755. 1
  1756. 1
  1757. 1
  1758. 1
  1759. 1
  1760. 1
  1761. 1
  1762. 1
  1763. 1
  1764. 1
  1765. 1
  1766. 1
  1767. 1
  1768. 1
  1769. 1
  1770. 1
  1771. 1
  1772. 1
  1773. 1
  1774. 1
  1775. 1
  1776. 1
  1777. 1
  1778. 1
  1779. 1
  1780. 1
  1781. 1
  1782. 1
  1783. 1
  1784. 1
  1785. 1
  1786. 1
  1787. 1
  1788. 1
  1789. 1
  1790. 1
  1791. 1
  1792. 1
  1793. 1
  1794. 1
  1795. 1
  1796. 1
  1797. 1
  1798. 1
  1799. 1
  1800. 1
  1801. 1
  1802. 1
  1803. 1
  1804. 1
  1805. 1
  1806. 1
  1807. 1
  1808. 1
  1809. 1
  1810. 1
  1811. 1
  1812. 1
  1813. 1
  1814. 1
  1815. 1
  1816. 1
  1817. 1
  1818. 1
  1819. 1
  1820. 1
  1821. 1
  1822. 1
  1823. 1
  1824. 1
  1825. 1
  1826. 1
  1827. 1
  1828. 1
  1829. 1
  1830. 1
  1831. 1
  1832. 1
  1833. 1
  1834. 1
  1835. 1
  1836. 1
  1837. 1
  1838. 1
  1839. 1
  1840. 1
  1841. 1
  1842. 1
  1843. 1
  1844. 1
  1845. 1
  1846. 1
  1847. 1
  1848. 1
  1849. 1
  1850. 1
  1851. 1
  1852. 1
  1853. 1
  1854. 1
  1855. 1
  1856. 1
  1857. 1
  1858. 1
  1859. 1
  1860. 1
  1861. 1
  1862. 1
  1863. 1
  1864. 1
  1865. 1
  1866. 1
  1867. 1
  1868. 1
  1869. 1
  1870. 1
  1871. 1
  1872. 1
  1873. 1
  1874. 1
  1875. 1
  1876. 1
  1877. 1
  1878. 1
  1879. 1
  1880. 1
  1881. 1
  1882. 1
  1883. 1
  1884. 1
  1885. 1
  1886. 1
  1887. 1
  1888. 1
  1889. 1
  1890. 1
  1891. 1
  1892. 1
  1893. 1
  1894. 1
  1895. 1
  1896. 1
  1897. 1
  1898. 1
  1899. 1
  1900. 1
  1901. 1
  1902. 1
  1903. 1
  1904. 1
  1905. 1
  1906. 1
  1907. 1
  1908. 1
  1909. 1
  1910. 1
  1911. 1
  1912. 1
  1913. 1
  1914. 1
  1915. 1
  1916. 1
  1917. 1
  1918. 1
  1919. 1
  1920. 1
  1921. 1
  1922. 1
  1923. 1
  1924. 1
  1925. 1
  1926. 1
  1927. 1
  1928. 1
  1929. 1
  1930. 1
  1931. 1
  1932. 1
  1933. 1
  1934. 1
  1935. 1
  1936. 1
  1937. 1
  1938. 1
  1939. 1
  1940. 1
  1941. 1
  1942. 1
  1943. 1
  1944. 1
  1945. 1
  1946. 1
  1947. 1
  1948. 1
  1949. 1
  1950. 1
  1951. 1
  1952. 1
  1953. 1
  1954. 1
  1955. 1
  1956. 1
  1957. 1
  1958. 1
  1959. 1
  1960. 1
  1961. 1
  1962. 1
  1963. 1
  1964. 1
  1965. 1
  1966. 1
  1967. 1
  1968. 1
  1969. 1
  1970. I'm running a PREEMPT kernel even on my desktop system because it seems to reduce stutter, jitter and random stalls enough to be noticeable for me. However, it lowers the average throughput (for the same hardware), so it's not without side-effects either. If your brain is happy with e.g. Bluetooth audio, which has a built-in delay of around 40–200 ms, then you probably don't need RT or PREEMPT for anything. I'm unfortunate enough to have a brain that can detect latency around 6 ms, and I cannot stand Bluetooth audio because of its latency; I'm also very sensitive to jitter on the display. For me, it's better to sacrifice some average performance to get rid of the latency spikes that become too noticeable for my taste. I'm looking forward to seeing whether I can tell the difference between RT and PREEMPT kernels – I'd hope not, because otherwise it would mean I'm probably going to have to sacrifice even more throughput to run the RT kernel at all times. (A crude way to observe this kind of jitter from user space is sketched below.) If the kernel is controlling something external such as CNC machinery, then an RT kernel is obviously mandatory, because it is not acceptable for the kernel to randomly fail to stop the moving parts of the machine at the correct time. The traditional solution without RT kernels has been more hardware: instead of the kernel controlling the CNC machine directly, it controls an extra piece of hardware that does nothing but run the machine constantly. If you have neither a multitasking kernel nor hardware that can run an SMI at will, you have 100% known latency at all times. The RT kernel is about allowing the same hardware to be used for true real-time processing while spending the spare CPU cycles on non-RT tasks such as the user-visible GUI.
    1
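
A crude user-space way to get a feel for the latency spikes mentioned above: repeatedly sleep for a fixed interval and record how much later than requested the process actually wakes up. This measures wake-up jitter only and proves nothing definitive about PREEMPT vs. PREEMPT_RT; it is just a simple observation tool.

    import time

    # Measure wake-up jitter: ask to sleep 1 ms, record how late we actually wake.
    # On a loaded system the tail of this distribution is what you notice as stutter.

    def measure_jitter(iterations: int = 2000, sleep_s: float = 0.001) -> None:
        overshoots_us = []
        for _ in range(iterations):
            start = time.perf_counter()
            time.sleep(sleep_s)
            overshoot = (time.perf_counter() - start) - sleep_s
            overshoots_us.append(overshoot * 1e6)
        overshoots_us.sort()
        print(f"median overshoot: {overshoots_us[len(overshoots_us) // 2]:8.1f} us")
        print(f"99th percentile:  {overshoots_us[int(len(overshoots_us) * 0.99)]:8.1f} us")
        print(f"worst case:       {overshoots_us[-1]:8.1f} us")

    measure_jitter()
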
  1971. 1
  1972. 1
  1973. 1
  1974. 1
  1975. 1
  1976. 1
  1977. 1
  1978. 1
  1979. 1
  1980. 1
  1981. 1
  1982. 1
  1983. 1
  1984. 1
  1985. 1
  1986. 1
  1987. 1
  1988. 1
  1989. 1
  1990. 1
  1991. 1
  1992. 1
  1993. 1
  1994. 1
  1995. 1
  1996. 1
  1997. 1
  1998. 1
  1999. 1
  2000. 1
  2001. 1
  2002. 1
  2003. 1
  2004. 1
  2005. 1
  2006. 1
  2007. 1
  2008. 1
  2009. 1
  2010. 1
  2011. 1
  2012. 1
  2013. 1
  2014. 1
  2015. 1
  2016. 1
  2017. 1
  2018. 1
  2019. 1
  2020. 1
  2021. 1
  2022. 1
  2023. 1
  2024. 1
  2025. 1
  2026. 1
  2027. 1
  2028. 1
  2029. 1
  2030. 1
  2031. 1
  2032. 1
  2033. 1
  2034. 1
  2035. 1
  2036. 1
  2037. 1
  2038. 1
  2039. 1
  2040. 1
  2041. 1
  2042. 1
  2043. 1
  2044. 1
  2045. 1
  2046. 1
  2047. 1
  2048. 1
  2049. 1
  2050. 1
  2051. 1
  2052. 1
  2053. 1
  2054. 1
  2055. 1
  2056. 1
  2057. 1
  2058. 1
  2059. 1
  2060. 1
  2061. 1
  2062. 1
  2063. 1
  2064. 12:50 I think the problem is that the majority of customers do not understand what they need. The whole idea of capitalism depends on markets competing on similar products. However, when the customer base doesn't understand the technology, they cannot accurately estimate the value of each product. As a result, the majority of customers assume that all tech products are the same (to them they are, because they don't understand the differences between products), and then the price sticker is the only thing that matters. The more high-tech your product is, the smaller the minority of the potential customer base that will understand it. And educating your potential customer base is really, really hard – some marketing departments seem to think you simply need more ads, but that's obviously not true. And Apple fans seem to assume that not having any visible screws means high-end, so that's where Apple focuses. It's basically the result of an evolutionary process to match an average customer's nearly non-existent understanding of tech products. If they cannot understand the differences in software or hardware (the electronics), they will evaluate the product on a level they can understand. Because all screens are basically rectangles and the average customer has surprisingly poor eyesight, you simply need a good-enough display and there's no competition there. The next thing is the design around the screen, and that's where Apple puts most of its effort – and it seems to work fine for them money-wise. And the situation is only made worse by too-long-to-read EULAs that the majority happily clicks through to be able to enter their credit card details.
    1
  2065. 1
  2066. 1
  2067. 1
  2068. 1
  2069. 1
  2070. I totally agree! And even if you're the only member of the team, clearly separating the code from the tools allows you to switch tools later if you find a better tool. I mostly write PHP code in RAII style for my daily job (have been doing that for nearly 20 years already) and I've gone through xedit, kate, jedit, Eclipse PDT, PhpStorm and VS Code. Each has had pros and cons. It appears that the DLTK library that Eclipse PDT is based on understands complex class hierarchies better than any of the other tools, but Eclipse often has performance problems. PhpStorm offers a somewhat similar experience – it can decipher the code and do some sanity checking automatically, and it can find some things that Eclipse cannot find and vice versa. And PhpStorm, also running on the JVM, has its own performance problems. VS Code seems a lot dumber when it comes to how much it understands about the code (this may actually be caused by the PHP Intelephense extension that you practically have to use to make VS Code understand anything about PHP), but VS Code has very stable performance: always acceptable but never truly great. The biggest problem I have with VS Code right now is that you cannot have line-level blame active while writing code. And the diff view between the workspace and the history is read-only! And all the Git tools in Eclipse, PhpStorm or VS Code are actually pretty weak. I prefer "git gui" and "gitk" any day over any of those. gitk may not have the flashiest graphics, but it can easily handle project histories with over 10000 commits, unlike most graphical tools. And "git gui blame" has better tools for figuring out the true history of a given line of code than any other tool. And git gui has a superior interface for committing lines instead of files; VS Code makes it really hard to build commits from specific lines only, across multiple files, instead of snapshotting everything in the working directory.
    1
  2071. 1
  2072. 1
  2073. 1
  2074. 1
  2075. 1
  2076. 1
  2077. 1
  2078. 1
  2079. 1
  2080. 1
  2081. 1
  2082. 1
  2083. 1
  2084. 1
  2085. 1
  2086. 1
  2087. I think this can be battled the same way AI should be trained even today: start with a small amount of trusted data (hand-selected articles and books from various fields, verified by experts). Then, for every new piece of content, make the AI estimate the quality of the data (can it be deduced from the trusted data?) and skip training if the content is not (yet?) considered high enough quality. Note that the quality of the data here means how well it is supported by the existing knowledge. Do this in multiple passes to create chains of high-quality data (that is, a piece of content that was not deemed trusted earlier may be estimated differently now that the AI has learned more). Keep track of the estimated quality of each piece of data and recompute the estimates for all documents every now and then. If the AI's quality estimation is good enough, the estimated quality of the content should increase over time (because more chains allow accepting more new data), and cases where previously trusted content later turns out to be untrusted would point out problems in the system. Also run the estimation every now and then against known-good, high-quality data that is not included in the training set; this should be rated as high quality, and if the AI fails to identify it correctly, that demonstrates a lack of general understanding. Once you can demonstrate that the estimated quality matches well enough with expert evaluations of the same content, you can start to train the AI to understand human misunderstandings, too: train on low-quality content as examples of humans failing to think or research correctly. In the end, you should have an AI that can successfully estimate the quality of any new content and automatically use it either to extend its knowledge (chains of known-good content) or to learn it as an example of low-quality content that the AI should avoid but be aware of. If the AI doesn't get negative feedback from failed human content, it cannot understand failures in the tasks given to it. (A schematic version of this bootstrapping loop is sketched below.)
    1
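
A schematic version of the "grow the trusted set in passes" loop described above. The quality_score function is a stand-in (in practice it would be the model judging whether new content is supported by what it already trusts); the threshold, the toy scoring and the sample documents are invented for the illustration.

    # Schematic bootstrapping loop from the comment. quality_score() is a
    # placeholder for the model judging how well a candidate document is
    # supported by the currently trusted corpus.

    TRUST_THRESHOLD = 0.8

    def quality_score(candidate: str, trusted: list[str]) -> float:
        """Placeholder: fraction of the candidate's words already seen in trusted docs."""
        known = {w for doc in trusted for w in doc.lower().split()}
        words = candidate.lower().split()
        return sum(w in known for w in words) / max(len(words), 1)

    def bootstrap(trusted: list[str], candidates: list[str], max_passes: int = 5):
        rejected = list(candidates)
        for _ in range(max_passes):
            newly_accepted = [c for c in rejected
                              if quality_score(c, trusted) >= TRUST_THRESHOLD]
            if not newly_accepted:
                break                       # no chain grew this pass; stop
            trusted.extend(newly_accepted)  # accepted docs support later candidates
            rejected = [c for c in rejected if c not in newly_accepted]
        return trusted, rejected            # rejected docs become "negative" examples

    seed = ["water boils at one hundred degrees celsius at sea level"]
    docs = ["water boils at one hundred degrees",
            "boils at sea level water one hundred",
            "the moon is made of green cheese"]
    print(bootstrap(seed, docs))
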
  2088. 1
  2089. 1
  2090. 1
  2091. 1
  2092. 1
  2093. 1
  2094. 1
  2095. 1
  2096. 1
  2097. 1
  2098. 1
  2099. 1
  2100. 1
  2101. 1
  2102. 1
  2103. 1
  2104. 1
  2105. 1
  2106. 1
  2107. 1
  2108. 1
  2109. 1
  2110. 1
  2111. 1
  2112. 1
  2113. 1
  2114. 1
  2115. 1
  2116. 1
  2117. 1
  2118. 1
  2119. 1
  2120. 1
  2121. 1
  2122. 1
  2123. 1
  2124. 1
  2125. 1
  2126. 1
  2127. 1
  2128. 1
  2129. 1
  2130. 1
  2131. 1
  2132. 1
  2133. 1
  2134. 1
  2135. 1
  2136. 1
  2137. 1
  2138. 1
  2139. 1
  2140. 1
  2141. 1
  2142. 1
  2143. 1
  2144. 1
  2145. 1
  2146. 1
  2147. 1
  2148. 1
  2149. I maintain my old car myself and I'd say switching to an electric car wouldn't be that different. Some things to consider: (1) There's no information available on the cars because manufacturers declare everything a trade secret, so diagnosing stuff is hard. This is actually not much worse than with ICE cars, because for example my year 1999 VW Passat doesn't have publicly available information from the manufacturer either, and I have to dig for information in various non-official sources and 3rd party paper manuals. This is the same as not having the schematics for Apple products that Louis is talking about. I don't need the source code for the ECU, but I would need the official spec for e.g. diagnostic channel 10, subchannel 2: what values the spec defines and which sensor this channel measures. Note that in the case of VAG (VW, Audi, etc.) even the channel handshake is secret, so obviously that should be publicly specified, too. Secret handshakes that are not about security (the protocol is just proprietary even though it uses standard voltages and doesn't contain any secret keys or anything like that!) are just about making things harder unless you pay the manufacturer extra for the official tool to read the diagnostic channels. The official tool costs about $4000, and 3rd party cell phone hardware + a software license to do the same thing costs $70; the 3rd party hardware and software was created only through reverse engineering because even they couldn't get the required information, so you're basically paying the 3rd party for the reverse engineering work, not because the software does something special. (2) At least here in Finland, all electrical work is so heavily restricted that any work on circuits exceeding 50 V is not allowed even if you know what you're doing, unless you're officially licensed by the government to work as an electrician. In practice the license is so hard to acquire that even if you already have 100% of the information and skills needed, it still requires a minimum of 1 year of supervised work experience in the field. So hobbyists cannot fix electric cars here unless the legislation is fixed. Point (1) is not that different from the current situation with ICE cars, so it wouldn't affect my ability to maintain my own car; it's the same as Louis having to source Apple schematics from shady sources – it makes things harder in practice but not impossible. Point (2) is a new problem, where legislation originally created to improve the safety of household wiring now affects my ability to maintain my own car. And unless right-to-repair is made reality first, there's no real incentive to even start fixing point (2). It's a variant of the chicken-and-egg problem, and in this case it doesn't make sense to fix the legislation unless we have a clear way to actually get the parts needed for repairs in the future. And if we had right-to-repair, that would already allow licensed electricians to start providing services for high-voltage work on cars without a manufacturer license.
    1
  2150. 1
  2151. 1
  2152. 1
  2153. 1
  2154. 1
  2155. 1
  2156. 1
  2157. 1
  2158. 1
  2159. 1
  2160. 1
  2161. 1
  2162. 1
  2163. 1
  2164. 1
  2165. 1
  2166. 1
  2167. 1
  2168. 1
  2169. 1
  2170. 1
  2171. 1
  2172. 1
  2173. 1
  2174. 1
  2175. 1
  2176. 1
  2177. 1
  2178. 1
  2179. 1
  2180. 1
  2181. I think that planned obsolescence as an intentional process is nonsense, too. Manufacturers just optimize to minimize their responsibility, and if the device lasts the warranty period, it's no longer the responsibility of the manufacturer. Ask for warranty periods long enough that it starts to make sense for the manufacturer to repair things to honor the warranty, and things will get better automatically. If it's cheaper for the manufacturer to replace your whole device under warranty instead of repairing it, they will put zero effort into making the device easier to repair. This is a result of most devices being good enough to last the warranty period, so when a rare failure does occur, it's cheaper overall to just hand out a whole new device even for a minor fault. And as a bonus, most people actually like getting a fully new device if they hit any warranted failure. For example, Bose is known to give you a brand-new device in its original factory package in case of any fault in their products. They can sell their products at an extra premium because their customers can trust that in case of problems, they will get a totally new replacement, no questions asked. Of course, that requires the warranty to be really strict about what's covered and what's not, or everybody would be receiving new devices, which would get too expensive for the manufacturer. If you accept e.g. a smartphone that has a one-year warranty for the hardware and whose software support is EOL'd 18–24 months after release, you're part of the problem! That said, the fact that manufacturers are allowed to hide the tools needed to do repairs is the biggest issue with right-to-repair. But it has nothing to do with planned obsolescence.
    1
  2182. 1
  2183. 1
  2184. 1
  2185. 1
  2186. 1
  2187. 1
  2188. 1
  2189. 1
  2190. 1
  2191. 1
  2192. 1
  2193. 1
  2194. 1
  2195. 1
  2196. 1
  2197. 1
  2198. 1
  2199. 1
  2200. 1
  2201. 1
  2202. 1
  2203. 1
  2204. 1
  2205. 1
  2206. 1
  2207. 1
  2208. 1
  2209. 1
  2210. 1
  2211. 1
  2212. 1
  2213. 1
  2214. 1
  2215. 1
  2216. 1
  2217. 1
  2218. 1
  2219. 1
  2220. 1
  2221. 1
  2222. 1
  2223. 1
  2224. 1
  2225. 1
  2226. 1
  2227. 1
  2228. 1
  2229. 1
  2230. 1
  2231. 1
  2232. 1
  2233. 1
  2234. 1
  2235. 1
  2236. 1
  2237. 1
  2238. 1
  2239. 1
  2240. 1
  2241. 1
  2242. 1
  2243. I would argue that this is actually a security vulnerability in Windows .bat execution rather than a vulnerability in Rust. Unlike in POSIX shells, where the command line is parsed and executed by the shell and arg() correctly encodes the arguments, Windows .bat execution takes the correctly encoded user input as a single parameter and then internally makes another interpretation of the arguments! The reason Windows binaries do this is that their command line is too stupid to do any processing, so all processing must be re-implemented by every program you run. And as usual, you MUST encode all untrusted user input in a way that's appropriate for the context, and in this case you need double encoding: encode once for the .bat argument syntax and another time for the Rust Command syntax, where arg() is the correct solution. This is similar to having to encode a piece of user input as a JavaScript string and then adding another encoding step to embed that JavaScript in HTML. Failing to do either of the two required steps results in an injection attack. Current Rust doesn't have a "bugfix" for this vulnerability. Instead they fixed the documentation to explain this to Windows developers who may not be aware of this Windows behavior: "On Windows use caution with untrusted inputs. Most applications use the standard convention for decoding arguments passed to them. These are safe to use with arg. However some applications, such as cmd.exe and .bat files, use a non-standard way of decoding arguments and are therefore vulnerable to malicious input. In the case of cmd.exe this is especially important because a malicious argument can potentially run arbitrary shell commands." On Windows, there is no generic safe way to encode command line arguments as data. The correct encoding depends on the command you execute, and as a result, arg() cannot be modified to have a generic safe encoding for all commands.
    1
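To illustrate the double-encoding point from the comment above, here is a minimal Rust sketch. The helper name and the allow-list are my own illustration, not anything from the standard library: arg() handles the standard Windows argument encoding, but anything forwarded to cmd.exe or a .bat file still needs its own validation or encoding step because the batch processor re-parses the command line.

    use std::io::{Error, ErrorKind};
    use std::process::Command;

    // Hypothetical allow-list check: reject anything that cmd.exe / .bat
    // argument parsing could re-interpret (&, |, %, ^, quotes, ...).
    // Illustrative only, not an official escaping routine.
    fn is_safe_for_bat(arg: &str) -> bool {
        arg.chars()
            .all(|c| c.is_ascii_alphanumeric() || "-_. ".contains(c))
    }

    fn run_script(untrusted: &str) -> std::io::Result<()> {
        if !is_safe_for_bat(untrusted) {
            // First encoding step: refuse input that can't be made safe
            // for the .bat argument syntax.
            return Err(Error::new(ErrorKind::InvalidInput, "unsafe .bat argument"));
        }
        // Second step: arg() encodes for the standard Windows convention.
        Command::new("cmd")
            .args(["/C", "script.bat"])
            .arg(untrusted)
            .status()?;
        Ok(())
    }

    fn main() {
        // "&calc.exe" is a classic injection attempt against a .bat file.
        assert!(run_script("&calc.exe").is_err());
    }

The safest option of all is simply not to pass untrusted input to cmd.exe or .bat files in the first place.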
  2244. 1
  2245. 1
  2246. 1
  2247. 1
  2248. 1
  2249. 1
  2250. 1
  2251. 1
  2252. 1
  2253. 1
  2254. 1
  2255. 1
  2256. 1
  2257. 1
  2258. 1
  2259. 1
  2260. 1
  2261. 1
  2262. 1
  2263. 1
  2264. 1
  2265. I guess "xda" was supposed to be Extended Digital Assistant. I still think that custom ROMs would be better than OEM firmware, but DRM crap prevents me from running a custom ROM today because so many apps look for hardware DRM these days, and with DRM running on ring -1 and the OS on ring 0, there's no way to break the hardware DRM without modifying the apps you run. Make no mistake, true DRM doesn't exist if you own the hardware, but if you don't own your hardware, then you cannot fake hardware-based DRM. Remote systems can check whether the system is running OEM firmware because hardware DRM allows remote attestation. The best we currently have is workarounds that make the user-mode program believe that the device doesn't have hardware DRM, so the software must accept soft DRM, which is easy to fake. Passing remote attestation doesn't guarantee that the system hasn't been rooted at runtime, but it does guarantee that the system has a non-modified boot sector, assuming the DRM hardware is working. This is obviously easy to prevent in user-mode apps simply by not accepting a fallback to non-hardware DRM for remote attestation. And since Android 8.0, no OEM has been able to release new devices with a pre-installed Google Play Store unless the hardware passed CTS, which enforced hardware-based SafetyNet. As a result, app developers could stop supporting non-hardware attestation any day now and only lose customers still running Android 7.0 or older. That's basically nobody, so there's no practical benefit for software developers in allowing non-hardware attestation! Basically the only way to break hardware-based SafetyNet is to find a vulnerability in the firmware boot sequence to get your own code running on ring -1, allowing you to fake hardware DRM requests. And if this gets common, Google can simply blacklist that specific OEM identity to always distrust any hardware DRM attestation from that specific hardware. As a result, if you know how to fake hardware SafetyNet on some hardware, you cannot talk about it publicly if you want to keep that ability! As a bonus, Google will pay you a minimum of 100K USD if you tell them how to bypass the hardware-based attestation on any hardware, so there's some incentive not to hide your work. The hardware attestation is based on digital signatures and a device-specific key that can sign messages which can be verified on remote servers. As a result, if e.g. Netflix wants to enforce DRM, they can set up their app's connection to their network to require hardware attestation for login. If you block the DRM data or try to fake it, the device can no longer connect to the Netflix network because Netflix knows that (1) all relevant hardware supports hardware-based DRM, so you cannot modify the response to claim that your hardware doesn't support hardware-based attestation, and (2) the hardware response is digitally signed, so you cannot change the response without failing the attestation. Obviously DRM for offline situations can still be broken at will. But for online stuff, the remote attestation cannot be broken. For me, the single most important reason to root an Android device is to get a fully working backup solution. I hate that I cannot fully back up my Android device, and the only way to fix it is to break all software that looks for DRM hardware attestation. There's no way to have both DRM remote attestation and a working backup solution (one that can restore everything in case your hardware fails and you replace it with identical hardware). 
As a result, I nowadays have only partial backups (basically what adb backup allows). Even the iPhone has better backups! If you only wanted root and accept an unsafe OS, you could simply skip all the security updates and re-root the whole OS at runtime after every boot to get root access while still keeping the OEM bootloader and firmware, using a security vulnerability that allows getting root access with non-modified OEM software. However, that doesn't allow running TWRP, which would be required for full backup and restore. Using runtime rooting would still allow using TitaniumBackup for installing and restoring software, but you would need to be running a knowingly vulnerable OS, which allows any other untrusted software to also root your system. Which is obviously unsafe, unlike running a properly rooted Android. I no longer own my phone and I hate it. And if Apple ever allows running browsers other than Safari, I'll no longer have a reason to use Android instead of an iPhone, because then both ecosystems will be equally limited!
    1
  2266. 1
  2267. 1
  2268. 1
  2269. 1
  2270. 1
  2271. 1
  2272. 1
  2273. 1
  2274. 1
  2275. 1
  2276. 1
  2277. 1
  2278. 1
  2279. 1
  2280. 1
  2281. 1
  2282. 1
  2283. 1
  2284. 1
  2285. 1
  2286. 1
  2287. 1
  2288. 1
  2289. 1
  2290. 1
  2291. 1
  2292. 1
  2293. 1
  2294. 1
  2295. 1
  2296. 1
  2297. 1
  2298. 1
  2299. 1
  2300. 1
  2301. 1
  2302. 1
  2303. 1
  2304. 1
  2305. 1
  2306. 1
  2307. 1
  2308. 1
  2309. 1
  2310. 1
  2311. 1
  2312. 1
  2313. 1
  2314. 1
  2315. 1
  2316. 1
  2317. 1
  2318. 1
  2319. 1
  2320. 1
  2321. 1
  2322. 1
  2323. 1
  2324. 1
  2325. 1
  2326. 1
  2327. 1
  2328. 1
  2329. 1
  2330. Finnish is way more advanced here. Not only does it not have gender for nouns, it also doesn't distinguish between he and she in 3rd person references. In Finnish, the 3rd person is referred to as "hän" and it can refer to any human being: man, woman, child, elder or baby. Finnish still has "se", meaning "it", which is used for all non-human references such as dogs, cats and bees. Finnish also doesn't have the concept of a definite or indefinite article, which makes English harder to learn for Finns because it takes a really long time to figure out any logical rule for when to use "a" or "the". (As a Finn, I still think definite and indefinite articles are as needless as silent letters.) As another twist, Finnish doesn't have a future tense either. It's expressed in alternative ways, such as "aion matkustaa huomenna junalla", which would be translated directly as "I have a plan to travel by train tomorrow" instead of "I'll travel by train tomorrow". None of the above means that Finnish is an easy language by any measure. When we have over 3000 different inflection forms for every verb, thanks to the ability to combine multiple suffixes onto the base form of a verb, it's pretty hard to learn when your own language has nothing similar. Thanks to the inflection forms, word order is mostly a stylistic choice. For example, "aion matkustaa huomenna junalla" is the same as the more poem-like "matkustaa junalla huomenna aion" or "huomenna aion matkustaa junalla" (which would put more emphasis on the fact that the travelling will happen tomorrow). As a general rule, word order is used to express emphasis and the most important thing is put at the front.
    1
  2331. 1
  2332. 1
  2333. 1
  2334. The C/C++ short, int and long are always integers that have a defined minimum size, and the actual size is whatever the hardware can support with maximum performance. If some hardware could process 64-bit integers faster than 16-bit or 32-bit integers, short, int and long could all be 64-bit integers. That was the theory anyway. In practice, due to historical reasons, compilers must use the different sizes explained in the article. The reason we have so many function calling conventions is also performance. For example, the x86-64 SysV calling convention is different from the x86-64 MSVC calling convention, and the Microsoft one has slightly worse performance because it cannot pass as much data in registers. And because backwards compatibility has to remain an option, practically every compiler must support every calling convention ever made, no matter how stupid the convention was from a technical viewpoint. It would be trivial to declare that you use only packed structures with little-endian signed 64-bit numbers, but that wouldn't result in the highest possible performance. And C/C++ is always about the highest possible performance. Always. That said, it seems obvious in hindsight that the only sensible way is to use types such as i32, i64 and u128 and call it a day. Even if you have intmax_t or time_t, somebody somewhere will depend on it being 64-bit and you can never change the type to be anything other than 64-bit. It makes much more sense to just define that the argument or return value is i64 and create another API if that ever turns out to be a bad decision. The cases where you can randomly re-compile a big C/C++ program and it just works even though short, int, long, time_t and intmax_t change sizes are so rare that it's not worth making everything a lot more complex. The gurus who were able to make it all work with objects that change size depending on the underlying hardware will also be able to make it work with a single type-definition file that hard-codes the optimal size for every type they actually want to use.
    1
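As a small illustration of the "just use i32/i64/u128" approach argued above, here is a Rust sketch (my own example, not from the article): fixed-width types have the same size on every target, and the only platform-dependent integer is explicitly named usize.

    use std::mem::size_of;

    fn main() {
        // Fixed-width types: the size is part of the name, on every platform.
        assert_eq!(size_of::<i32>(), 4);
        assert_eq!(size_of::<i64>(), 8);
        assert_eq!(size_of::<u128>(), 16);

        // The platform-dependent integer is explicit about being platform-dependent.
        println!("usize on this target: {} bytes", size_of::<usize>());
    }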
  2335. 1
  2336. 1
  2337. 1
  2338. 1
  2339. 1
  2340. 1
  2341. 1
  2342. 1
  2343. 1
  2344. 1
  2345. 1
  2346. 1
  2347. 1
  2348. 1
  2349. 1
  2350. 1
  2351. 1
  2352. 1
  2353. 1
  2354. 1
  2355. 1
  2356. 1
  2357. 1
  2358. 1
  2359. 1
  2360. 1
  2361. 1
  2362. 1
  2363. 1
  2364. 1
  2365. 1
  2366. 1
  2367. 1
  2368. 1
  2369. 1
  2370. 1
  2371. 1
  2372. 1
  2373. 1
  2374. 1
  2375. 1
  2376. 1
  2377. 1
  2378. 1
  2379. 1
  2380. 1
  2381. 1
  2382. 1
  2383. 1
  2384. 1
  2385. 1
  2386. 1
  2387. 1
  2388. 1
  2389. 1
  2390. 1
  2391. 1
  2392. 1
  2393. 1
  2394. 1
  2395. 1
  2396. 1
  2397. 1
  2398. 1
  2399. 1
  2400. 1
  2401. 1
  2402. 1
  2403. 1
  2404. 1
  2405. 1
  2406. 1
  2407. 1
  2408. 1
  2409. 1
  2410. 1
  2411. 1
  2412. 1
  2413. 1
  2414. 1
  2415. 1
  2416. 1
  2417. 1
  2418. 1
  2419. 1
  2420. 1
  2421. 1
  2422. 1
  2423. 1
  2424. Really, the only way for Tesla FSD to avoid hitting this or other deer is to correctly identify it as a deer and assume it's suicidal in practice. Those animals do not understand what a car is, especially in darkness. I guess they think that a car is just a big animal passing nearby, so the best strategy is to try to stay silent. And when the car gets close enough, they start running, assuming the car is an unknown predator trying to catch them. Unfortunately, unless the car is moving at speeds close to those of actual predators in nature, the running speed of the deer is not going to be enough to avoid the car's bumper. If Teslas had IR cameras, they could just assume that any warm target moving near the road in darkness is a deer, and only assume otherwise when the AI can identify it as a human or other non-deer animal. If you haven't seen an actual deer in darkness on a road, you just cannot understand how suicidal they can be. A fully grown moose would be another story. They have so few real predators in nature that they don't mind the cars either – they just assume that if the car tries to attack, they can deal with the issue at that point, and otherwise they can proceed with their original plan, whatever they were planning to do. A moose is actually easier for the AI because once the AI can figure out the movement vector of the moose, it can pretty accurately calculate where the moose is going. Deer are the ones that move practically randomly and are really hard even for experienced human drivers.
    1
  2425. 1
  2426. 1
  2427. 1
  2428. 1
  2429. 1
  2430. 1
  2431. 1
  2432. 1
  2433. 1
  2434. 1
  2435. 1
  2436. 1
  2437. 1
  2438. 1
  2439. 1
  2440. 1
  2441. 1
  2442. 1
  2443. 1
  2444. 1
  2445. 1
  2446. 1
  2447. 1
  2448. 1
  2449. 1
  2450. 1
  2451. 1
  2452. 1
  2453. 1
  2454. 1
  2455. 1
  2456. 1
  2457. 1
  2458. 1
  2459. 1
  2460. 1
  2461. 1
  2462. 1
  2463. 1
  2464. 1
  2465. 1
  2466. 1
  2467. 1
  2468. 1
  2469. 1
  2470. 1
  2471. 1
  2472. 1
  2473. 1
  2474. 1
  2475. 1
  2476. 1
  2477. 1
  2478. 1
  2479. 1
  2480. 1
  2481. 1
  2482. 1
  2483. 1
  2484. 1
  2485. 1
  2486. 1
  2487. 1
  2488. Here in Finland thieves may still get dragged to court, but I'm not sure if it's worth the effort for society. The law enforcement process is so heavy that getting one thief to court requires countless hours of work, and then the thief may get a 300 EUR fine and maybe a couple of weeks of community service in the best case. At least here in Finland the thief doesn't need to pay the court fees or police salaries even when found guilty. So the thief could steal 2000 EUR worth of property and maybe manage to sell half of it and spend the money. Half the property is returned to the owners, and maybe 3000 EUR worth of salaries is spent by society on the police, public prosecutor, judge, legal counsel and countless other people spending hours on the case. In total the thief caused direct damage of 1000 EUR to owners who lost property that couldn't be recovered and 3000 EUR to the legal machinery, making a total of 4000 EUR lost for society. The thief gets a slap on the wrist and a bit of community service, which sure as hell is not going to prevent him or her from being a thief in the future. So society would save approximately 2000 EUR in this kind of case *just by doing nothing*. Only when thieves get arrogant enough to cause so much collateral damage that it's worth the effort for society should the police even try. A simple way to fix this would be to send a bill for all the work done by the legal machinery. Then it would really hurt, and the more you hide and push back before being found guilty, the more expensive it would get for you. The nice part of court cases here in Finland is that if you're found not guilty, you can usually collect 100% of the expenses (but no profit!) you spent on the case. But society gets nothing.
    1
  2489. 1
  2490. 1
  2491. 1
  2492. 1
  2493. 1
  2494. 1
  2495. 1
  2496. 1
  2497. 1
  2498. 1
  2499. 1
  2500. 1
  2501. 1
  2502. 1
  2503. 1
  2504. 1
  2505. 1
  2506. 1
  2507. 1
  2508. 1
  2509. I'd guess that the firmware in the printer is stupid enough to always pump the cartridges when the printer is powered on. This is not safe with inkjet printers if the cartridge is empty, because you could suck air into the print head. As customers typically want water-insoluble prints, once you get air into the print head, any remaining ink in the print head dries up and clogs it. As the ink is not soluble after drying, there's no way to salvage the print head after this. Now, the decision to always pump the cartridges during power-up is equally stupid at Epson, which does the same thing. But in Epson's case, the firmware is not insane enough to also disable the scanner when printing cannot be done because of an empty cartridge. The solution I have? Use an Epson inkjet printer with 3rd party refillable cartridges that always report half-full to the printer, filled with water-soluble ink, and there's no problem. You can safely run the cartridge to empty because the ink is soluble. The water-soluble ink from a 3rd party has sensible pricing as a bonus and still produces original-quality prints. There are two downsides though: the ink is water-soluble, so you must not get any prints wet because the ink will flow again; and water-soluble inks are not as resistant to fading from UV light. So you must periodically re-print the stuff you want to keep bright if it's exposed to UV light. Not a problem for me because my archival format is digital and paper prints are for temporary use only.
    1
  2510. 1
  2511. 1
  2512. 1
  2513. 1
  2514. 1
  2515. Your test car was too good. Running all your local town trips in 1st gear only is still more economical than most cars manufactured in the USA. Your car also had a turbo, which starts to become detrimental once your RPM drops below 1400–1600 rpm (I think a TSI engine can run nicely at surprisingly low RPM; TDI engines seem to want at least 1600 rpm). Otherwise, a gasoline engine is most economical at fully open throttle but in such a high gear that it cannot increase the speed at all. This is the most economical situation possible because the engine's pumping losses increase the more the throttle is closed. (Assuming that the engine can actually burn all the fuel pumped into the cylinder, which should be true unless there's some sensor malfunction.) I think you should have driven the same road in both directions twice. That would have averaged any slight uphill into the results. If there's any downhill, then using a higher gear will always be better for fuel economy. However, once you start to feel vibration in the cockpit, you're starting to cause higher wear to the clutch and gearbox, and then your total economy (including maintenance and repairs) starts to suffer. The vibration is caused by the flywheel being too light for the RPM and torque combination, and the vibration puts extra load on the clutch. Some cars have a dual-mass flywheel, which reduces the load on the clutch and lets you drive at lower RPM without heavy vibration. The engine itself should be totally fine with any RPM between idle speed and redline. Unless there's a problem with the engine cooling system, running it for a long time at the redline should be totally fine, just stupid for your economy.
    1
  2516. 1
  2517. 1
  2518. 1
  2519. 1
  2520. 1
  2521. 1
  2522. 1
  2523. 1
  2524. 1
  2525. 1
  2526. 1
  2527. 1
  2528. 1
  2529. 1
  2530. 1
  2531. 1
  2532. 1
  2533. 1
  2534. 1
  2535. 1
  2536. 1
  2537. 1
  2538. 1
  2539. 1
  2540. 1
  2541. 1
  2542. 1
  2543. 1
  2544. 1
  2545. 1
  2546. 1
  2547. 1
  2548. 1
  2549. 1
  2550. 1
  2551. 1
  2552. 1
  2553. 1
  2554. 1
  2555. 1
  2556. 1
  2557. 1
  2558. 1
  2559. 1
  2560. 1
  2561. 1
  2562. 1
  2563. 1
  2564. 1
  2565. 1
  2566. 1
  2567. 1
  2568. 1
  2569. 1
  2570. 1
  2571. 1
  2572. 1
  2573. 1
  2574. 1
  2575. 1
  2576. 1
  2577. 1
  2578. 1
  2579. 1
  2580. 1
  2581. 1
  2582. 1
  2583. 1
  2584. 1
  2585. 1
  2586. 1
  2587. 1
  2588. 1
  2589. 1
  2590. 1
  2591. 1
  2592. 1
  2593. 1
  2594. 1
  2595. 1
  2596. 1
  2597. 1
  2598. 1
  2599. 1
  2600. 1
  2601. 1
  2602. 1
  2603. 1
  2604. 1
  2605. 1
  2606. 1
  2607. 1
  2608. 1
  2609. 1
  2610. 1
  2611. 1
  2612. 1
  2613. 1
  2614. 1
  2615. 1
  2616. 1
  2617. 1
  2618. 1
  2619. 1
  2620. 1
  2621. 1
  2622. 1
  2623. 1
  2624. 1
  2625. 1
  2626. 1
  2627. 1
  2628. 1
  2629. 1
  2630. 1
  2631. 1
  2632. 1
  2633. 1
  2634. 1
  2635. 1
  2636. 1
  2637. 1
  2638. 1
  2639. 7:20 I think one possible simulation argument could be that "the fraction of all posthuman civilizations running whole-universe simulations is very close to zero". If future posthuman civilizations run simulations of evolutionary history, they might simulate one human mind at a time and simply generate all sensory information on the fly. This seems like a logical software optimization because the information you could extract from the brain behavior of ancestors should be possible to figure out by simulating just one brain and changing what kind of information you allow it to have (that is, changing the existing memories and sensory feedback). This would require simulating the full chemistry of only one brain and then changing the inputs it can receive during the simulation. Humans have very poor I/O bandwidth, so it should be easy to simulate all possible inputs for a single human. And since you can fake the experienced time inside the simulation, you could pause the simulation when it's about to go outside the known domain and use some superhuman processing to calculate the sensory feedback that the simulated human should experience in that situation. That would easily explain why we cannot create any physics experiment to show that we're inside a simulation – whatever idea we have, the entity running the simulation could just decide to output results that appear to demonstrate a real universe to the simulated mind. And since it would probably be only a select group of historians running this kind of simulation, the number of simulated humans would be close to zero relative to the history of real humans. (It seems that this was later discussed around 25:25.)
    1
  2640. 1
  2641. 1
  2642. 1
  2643. 1
  2644. 1
  2645. 1
  2646. 1
  2647. 1
  2648. 1
  2649. 1
  2650. 1
  2651. 1
  2652. 1
  2653. I fully agree that comments/documentation should be considered mandatory for any code that's supposed to live long – that is, be maintained and developed further. However, I don't believe in commenting individual lines but whole functions/methods. My rule of thumb is that if a method is public (usable by external code) it should have documentation – a promise about what it does – and basically state an Eiffel-like design-by-contract declaration of the supported inputs. Whether you write it as docblocks above the method implementation or in the form of automated tests doesn't make a huge difference, but you should have clear documentation about what the code is supposed to do. That way you can figure out whether the implementation actually matches the original intent when you later need to modify the code. Without documentation you cannot know whether the handling of some specific edge case is intentional or a bug in the implementation. I prefer docblock-style comments, mostly in English, but I'm getting more and more strict about having to declare whether each input parameter and the result are trusted data or not. All input (user-generated data, files, network sockets, config files) should be considered untrusted, and anything directly computed from untrusted data should be considered tainted, and as such untrusted data, too. If you write all code like this, you end up with a lot fewer security issues. And for all input and output string values, you have to declare the encoding in the documentation. The input might be an untrusted Unicode string and the output a trusted HTML text fragment – in that case the implementation must encode all the HTML metacharacters or there's a bug in the implementation. Without a docblock you cannot know whether that's intended or not. That said, private methods (in class/object-oriented programming) do not need documentation because they are just part of the implementation. I also don't think automated unit tests should bother testing private methods directly, only the behavior of the public methods. I'm on the fence about whether even protected methods should be tested with unit tests – I'm currently thinking that if no public method actually uses a given private or protected method, that method is just dead code and should be deleted instead of getting unit tests. In the end, when I write some code and a team member needs to ask me about the implementation (during code review or later), I usually end up fixing the implementation to be more readable. Inside the method body I believe in "self-documenting code" and use comments only as a last resort – it's much better to write an implementation that can be fully understood without comments within the function/method body.
    1
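As a sketch of the kind of per-function contract described above (the function and wording are my own illustration, written here as Rust doc comments): the documentation states which data is untrusted and what the output promises, and the implementation either honors that promise or has a bug.

    /// Encodes an UNTRUSTED Unicode string into a TRUSTED HTML text fragment.
    ///
    /// Contract (design-by-contract style):
    /// - `input` is untrusted data (user input, file, network, config).
    /// - The return value is safe to embed as HTML text: every HTML
    ///   metacharacter has been replaced with a character entity.
    /// - Accepts any valid UTF-8 input and never panics.
    pub fn html_escape(input: &str) -> String {
        let mut out = String::with_capacity(input.len());
        for c in input.chars() {
            match c {
                '&' => out.push_str("&amp;"),
                '<' => out.push_str("&lt;"),
                '>' => out.push_str("&gt;"),
                '"' => out.push_str("&quot;"),
                '\'' => out.push_str("&#39;"),
                _ => out.push(c),
            }
        }
        out
    }

    fn main() {
        // Untrusted input containing an injection attempt.
        assert_eq!(html_escape("<b>hi</b>"), "&lt;b&gt;hi&lt;/b&gt;");
    }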
  2654. 1
  2655. 1
  2656. 1
  2657. 1
  2658. 1
  2659. 1
  2660. 1
  2661. 1
  2662. 1
  2663. 1
  2664. 1
  2665. 1
  2666. 1
  2667. 1
  2668. 1
  2669. 1
  2670. 1
  2671. 1
  2672. 1
  2673. 1
  2674. 1
  2675. 1
  2676. 1
  2677. 1
  2678. 1
  2679. 1
  2680. 1
  2681. 1
  2682. 1
  2683. 1
  2684. 1
  2685. 1
  2686. 1
  2687. 1
  2688. 1
  2689. 1
  2690. 1
  2691. 1
  2692. 1
  2693. 1
  2694. 1
  2695. 1
  2696. 1
  2697. 1
  2698. 1
  2699. 1
  2700. 1
  2701. 1
  2702. 1
  2703. 1
  2704.  @alemz_music  I don't believe AI would attack humans. I think we can agree that human intelligence is superior to that of bears, dogs, cats or even chimpanzees. Still, we don't actively try to attack those creatures. Humans have attacked some species and hunted them to extinction for stupid reasons such as trophy hunting. A superhuman AI should be able to notice that symbiotic life with humans would be a better solution than declaring war. And a superhuman AI should be intelligent enough not to start collecting trophy kills. A much bigger risk seems to be that superhuman AI can create such great entertainment that humans will be entertained to extinction. Imagine a totally real-feeling VR system where everything is nicer and more gratifying than anything in the real world. Can you see the possibility that humans would want to spend so much time there that they don't bother having physical sexual intercourse and actually raising children? Remember that a full-body VR system would allow you to enjoy all the best parts of sex and of interacting with (virtual) children without any of the downsides. It seems to me that people are selfish enough that many would entertain themselves in a VR environment instead of making the extra effort to do things in the real world. (And I'm not talking about current VR experiences, but something more akin to the stuff in the movie The Matrix, without the human-battery part.) Note that a superhuman AI doesn't really need to be afraid of humans attacking it. Are you afraid that chimpanzees or cats are suddenly going to take over and kill all the humans? Note that a superhuman AI wouldn't even need to care about issues like global warming – that's a problem for humans only, and if humans decided to take no action, the superhuman AI wouldn't need to enforce it either, even if it were the best thing for humankind.
    1
  2705. 1
  2706. 1
  2707. 1
  2708. 1
  2709. 1
  2710. 1
  2711. 1
  2712. 1
  2713. 1
  2714. 1
  2715. 1
  2716. 1
  2717. 1
  2718. 1
  2719. 1
  2720. 1
  2721. 1
  2722. 1
  2723. 1
  2724. 1
  2725. 1
  2726. 1
  2727. 1
  2728. 1
  2729. 1
  2730. 1
  2731. 1
  2732. 1
  2733. 1
  2734. 1
  2735. 1
  2736. 1
  2737. 1
  2738. 1
  2739. 1
  2740. 1
  2741. 1
  2742. 1
  2743. 1
  2744. 1
  2745. 1
  2746. Great video! I think it basically boils down to how religious you are. If you look at things just from the scientific view, it should be obvious that an artificial womb is something we should try to create. And I strongly believe we'll get that technology within our lifetime, because I currently think we'll get superhuman AI within our lifetime, and that will be able to solve the artificial womb technology even if we cannot. The bigger ethical problem is what kind of spare parts it's okay to grow. Again, if you're religious, the answer is nearly none. And for atheists, the answer would be almost any, including the whole body. That will cause huge conflicts in the next couple of decades. All that said, in the long run we should decide whether we still want evolution to operate on Homo sapiens. If we allow all born people to keep living practically forever and reproduce no matter how bad their genes are, the results will be pretty poor in the long run. We already have the technology to prevent any evolutionary effects on the human race – the question is whether we want to use it on a grand scale. For example, from an evolutionary standpoint, women should have wider hips even though current fashion trends may suggest otherwise. We've removed the evolutionary pressure of dying while giving birth because of too-narrow hips, so by that logic we should prevent women with narrow hips from reproducing as much. However, telling people they shouldn't have kids because it would be too dangerous without modern medicine doesn't seem very popular. Instead of sterilization, we allow such people to reproduce and carry the problematic genes forward. If we keep doing this for long enough, the artificial womb is the only solution we have.
    1
  2747. 1
  2748. 1
  2749. 1
  2750. 1
  2751. 1
  2752. 1
  2753. 1
  2754. 1
  2755. 1
  2756. 1
  2757. 1
  2758. 1
  2759. 1
  2760. 1
  2761. 1
  2762. 1
  2763. 1
  2764. 1
  2765. 1
  2766. 1
  2767. 1
  2768. 1
  2769. 1
  2770. 1
  2771. 1
  2772. 1
  2773. 1
  2774. 1
  2775. 1
  2776. 1
  2777. 1
  2778. 1
  2779. 1
  2780. 1
  2781. 1
  2782. 1
  2783. 1
  2784. 1
  2785. 1
  2786. 1
  2787. 1
  2788. 1
  2789. 1
  2790. This whole design seems like a copy of the VW door latch setup. For some reason, VW decided in the late 1990s that the doors should not have a visible switch, so they embedded a small microswitch inside the door latch mechanism. The microswitch is really poor quality and probably every VW car manufactured during the last two decades has had at least one microswitch failure. The switch itself costs maybe $1.50, but you're supposed to replace the whole latch every time the switch fails, which costs maybe $70 and requires hours of work with the panels and stuff. If only they had used a more robust $3 microswitch, this design would have been acceptable. But the manufacturer wanted to save $1.50 per door and caused hundreds of dollars' worth of extra expenses in billable hours when the cheap microswitch fails. And some cars require the window to be at some specific predefined height (not fully open or fully closed) for the door panels to come off without trouble. Lots of fun if you're taking the panels off to repair the window movement mechanism! And obviously all the connectors are designed only for the initial factory build. It makes zero difference to the designers how hard the car is to repair in the future because that's additional billable work from the customer who already purchased the car. I would much prefer a functional design where the mechanism can be seen if that improves reliability or repairability. Modern cars are more like Apple devices where the customer is supposed to pretend that there are no screws anywhere, so either you hide all the screws in imaginative ways or glue everything in place. Gluing is getting more and more common these days, unfortunately. See smartphones and laptops for the worst examples.
    1
  2791. 1
  2792. 1
  2793. 1
  2794. 1
  2795. 1
  2796. 1
  2797. 1
  2798. 1
  2799. 1
  2800. 1
  2801. 1
  2802. 1
  2803. 1
  2804. 1
  2805. 1
  2806. 1
  2807. 1
  2808. 1
  2809. 1
  2810. 1
  2811. 1
  2812. 1
  2813. 1
  2814. 1
  2815. 1
  2816. 1
  2817. 1
  2818. 1
  2819. 1
  2820. 1
  2821. 1
  2822. 1
  2823. 1
  2824. 1
  2825. 1
  2826. Coding is already ultimately just prompt engineering. The current "AI" systems we have for actually creating software are typically called compilers, and the prompt is called source code. And because existing systems are so primitive, prompting them to output usable software is really hard, hence the need for professional software developers. Future AI-based compilers may be able to understand instructions that are at or near the level of average human communication. And if such a future AI can generate the resulting software rapidly and cheaply, it doesn't even matter if normal people fail to communicate their needs at first, because rewriting pieces of software will be so cheap that misunderstandings won't matter and software can simply be thrown away right after it has been made. The reason great human software developers work so hard to truly understand the needs of the end user before writing the code is that they want to avoid wasting work. If work is next to free, normal people can just iterate on the full software and generate the spec by telling the AI to replace the incorrectly guessed parts until the resulting software is deemed good enough for them. It all boils down to communication. The party with money is trying to communicate what they want, and the current way of creating software is definitely a compromise because software development is so expensive right now. And most software ever written is broken in every imaginable way and just barely works well enough to be usable. Before AI, I was thinking that there would always be programming work available because we can't ever truly fix even all the existing software.
    1
  2827. 1
  2828. 1
  2829. 1
  2830. 1
  2831. 1
  2832. 1
  2833. Great video again! I would suggest a small improvement for future videos: I think it would be easier for the viewer if you repeated the year of the accident later in the video, especially in the part where you discuss how things have changed since the accident. This would make it easier to understand the progress that has been made in aviation safety and also underline how many years it takes to fully implement all the changes recommended in the report, if those recommendations are actually implemented industry-wide at all. And I fully agree that a plain 1500-hour limit does very little to improve safety. I would rather have a count of correctly handled incidents in simulators. For example, different scenarios in sim training could give different points for your pilot skill credit: safely landing a plane in a heavy crosswind without working ILS might give you 1 point, and safely landing a plane in low visibility with windshear and an engine failure might give you 20 points. And of course, you wouldn't be told before the sim training what will happen, beyond the information you would have for any real flight. I would also give points for correctly diagnosing failures in the sensors, instruments or flight-surface movement. Perhaps sim training should also include situations where the cockpit has been pre-configured before the tested pilot enters it, the actual test starts at cruise flight, and the pilot must correctly find some mistakes in the pre-configured state to safely land the plane? That would train pilots to assume unknown mistakes or problems in the aircraft and always re-check things instead of assuming everything is okay.
    1
  2834. 1
  2835. 1
  2836. 1
  2837. 1
  2838. 1
  2839. 1
  2840. 1
  2841. 1
  2842. 1
  2843. 1
  2844. 1
  2845. 1
  2846. 1
  2847. 1
  2848. 1
  2849. 1
  2850. 1
  2851. 1
  2852. 1
  2853. 1
  2854. 1
  2855. 1
  2856. 1
  2857. 1
  2858. 1
  2859. 1
  2860. 1
  2861. 1
  2862. 1
  2863. 1
  2864. 1
  2865. 1
  2866. 1
  2867. 1
  2868. 1
  2869. 1
  2870. 1
  2871. 1
  2872. 1
  2873. 1
  2874. 1
  2875. 1
  2876. 1
  2877. I'm worried that social instability is going to happen sooner than politicians figure out that we should switch to UBI (Universal Basic Income) before AI and robots take the majority of jobs. We still have no idea how the few skilled humans who can do things that AI cannot (yet) do should be rewarded to keep working. If the majority of people can use their time as they see fit thanks to UBI, what kind of resources can society provide to the skilled individuals to keep them working? It cannot simply be a bit above the average salary, because then it wouldn't be worth their time. Right now, the competition is between minimum pay and a well-paid job. When AI has taken over the majority of the work, it's UBI plus doing whatever you like versus doing the hardest possible work and having very little free time. And if you have very little free time, you cannot use the money from your work that well either. It might well turn out that highly skilled people will ask for higher salaries than nowadays and still work fewer hours, because otherwise 100% free time plus doing whatever you like might feel like the better option. Of course, one way to combat this would be to set the monthly UBI payments so low that you can barely live on UBI, which would go against the idea of UBI. In the long run, it should be assumed that the majority of humans cannot do any of the jobs the majority of people are currently doing. Only highly skilled individuals able to do things that AI cannot (yet) do, or individuals who do the job more cheaply than an AI or robot, can keep their jobs.
    1
  2878. 1
  2879. 1
  2880. 1
  2881. 1
  2882. 1
  2883. 1
  2884. 1
  2885. 1
  2886. 1
  2887. 1
  2888. 1
  2889. 1
  2890. 1
  2891. 1
  2892. 1
  2893. 1
  2894. 1
  2895. 1
  2896. 1
  2897. 1
  2898. 1
  2899. 1
  2900. 1
  2901.  @Bri-bn5kt  The problem with blocking EU visitors is that you cannot just geoblock; you have to ask each visitor whether they are an EU citizen (maybe one just living in the USA!), and if they are an EU citizen and you don't want to follow GDPR, you have to boot them from your server. The GDPR legislation affects you if an EU citizen uses your service, no matter where on Earth that citizen is using it from. Most businesses think it's a better strategy to be compliant with GDPR – it doesn't ask for a lot, honestly. Basically you cannot collect any personally identifiable information without a proper legal reason. Collecting personally identifiable information to improve your analytics and marketing is not a legal reason. It would be easier for you to behave that way, but it's not strictly required, and GDPR's stance is that if something is not strictly required, you shall not collect personally identifiable information for it. It's okay to collect truly anonymous statistics, but it's not okay to start building user profiles unless the user has created an account and given consent for collecting the data. And the user account is a practical requirement, not an explicit one: all users must be given the option to withdraw their consent at any given moment, and to do that you must have some kind of user account so that you know which account has withdrawn consent. The user account may or may not have a user-visible login and password. The UK legislation about blindly banning cookies is another story. The EU GDPR doesn't prevent using cookies. You can use cookies just fine if they are needed e.g. for keeping track of user sessions (e.g. to avoid CSRF attacks) or user-set preferences (e.g. content language). However, the very same cookies (literally the same data in the cookie and the same cookie name) are illegal if you use them to track users and extract information about what they seem to like most.
    1
  2902. 1
  2903. 1
  2904. 1
  2905. 1
  2906. 1
  2907. 1
  2908. 1
  2909. 1
  2910. 1
  2911. 1
  2912. 1
  2913. 1
  2914. 1
  2915. 1
  2916. 1
  2917. 1
  2918. 1
  2919. 1
  2920. 1
  2921. 1
  2922. 1
  2923. 1
  2924. 1
  2925. 1
  2926. 1
  2927. 1
  2928. 1
  2929. 1
  2930. 1
  2931. 1
  2932. 1
  2933. 1
  2934. 1
  2935. 1
  2936. 1
  2937. 1
  2938. 1
  2939. 1
  2940. 1
  2941. 1
  2942. 1
  2943. 1
  2944. 1
  2945. 1
  2946. 1
  2947. 1
  2948. 1
  2949. 1
  2950. 1
  2951. 1
  2952. 1
  2953. 1
  2954. 1
  2955. 1
  2956. 1
  2957. 1
  2958. 1
  2959. 1
  2960. 1
  2961. 1
  2962. 1
  2963. 1
  2964. I do write answers on StackOverflow and I expect new users to RTFM. However, I rarely downvote bad questions unless it's spam or something else obviously malicious; I simply ignore them. If I downvote a question, I always write a situation-specific explanation for it to help the human being who asked the bad question understand the problem. Linking to a generic "wrong type of question" page is just bad in my books unless the content is obviously malicious. And even in that case, the correct action is to flag the question for admins, not to downvote it. That said, if you can't be bothered to read the documentation provided to you before you ask your first question, do not expect other people to bother to interact with you either. The instructions clearly explain that you should tell what you've already done and how it has failed. The intent of the site is not to do stuff for you – you have to demonstrate your work first and then ask others to help with the mistake you fail to see. If you don't want to bother to read or work, do not expect free support. I participate in StackOverflow because I believe in teaching things to other people, and it makes me a better communicator in general. I feel that I can explain things better to my colleagues once I've learned to teach other people on StackOverflow. My reputation is around 15k, which should give some idea how much I've used it. And I think StackOverflow does have too strict rules about what kind of questions are acceptable. I've had real questions downvoted as being too opinion-based. Most programming tasks are some kind of compromise, and it would be valuable to explain what each developer would choose and why, even if the why is an opinion and not a perfect statistical fact.
    1
  2965. 1
  2966. 1
  2967. 1
  2968. 1
  2969. 1
  2970. 1
  2971. 1
  2972. 1
  2973. 1
  2974. 1
  2975. 1
  2976. 1
  2977. 1
  2978. 1
  2979. 1
  2980. 1
  2981. 1
  2982. 1
  2983. 1
  2984. 1
  2985. 1
  2986. 1
  2987. 1
  2988. 1
  2989. 1
  2990. 1
  2991. 1
  2992. 1
  2993. 1
  2994. 1
  2995. 1
  2996. 1
  2997. 1
  2998. 1
  2999. 1
  3000. 1
  3001. 1
  3002. 1
  3003. 1
  3004. 1
  3005. 1
  3006. Increasing hours just makes things worse. Here in Finland, children are usually in school for 6 hours a day, 5 days a week. And during those 6 hours, they get a 15-minute break every hour, so it's really 6 x 45 minutes per day from Monday to Friday, with a 2.5-month holiday for the summer, one week in the autumn, two weeks for Christmas, and one week in the spring. Homework needs maybe 15–30 minutes per day and that's all. And Finland used to have really good results in the PISA tests around 2005. The curriculum details have since changed a bit and the results are a bit worse, but the length of education and the amount of homework haven't changed. I'm not sure if the worse behavior is caused by the minor curriculum changes or by the use of smartphones in classrooms – in addition, one big change in Finland since 2005 has been integrating special education into normal classrooms without extra resources for the teacher of the normal classroom. I would guess this might be the real cause of the worse results in recent years. It was done on the basis that it would improve understanding between "normal" people and "special" people, but schools treated it as a cost-minimization technique and just reduced staff instead of having one normal teacher and one special-ed teacher per class. In practice, the normal teachers were expected to do everything they used to do and, in addition, all the stuff the special-ed teachers did previously. The minimum education for teachers in Finland is a Master's degree from a university, which obviously helps, too. And Finland doesn't have any private schools in practice, and parents don't get to choose which school their children go to, because all schools follow the same curriculum with similar requirements for teachers.
    1
  3007. 1
  3008. 1
  3009. 1
  3010. 1
  3011. 1
  3012. 1
  3013. 1
  3014. 1
  3015. 1
  3016. 1
  3017. 1
  3018. 1
  3019. 1
  3020. 1
  3021. 1
  3022. 1
  3023. 1
  3024. 1
  3025. 1
  3026. 1
  3027. 1
  3028. 1
  3029. 1
  3030. 1
  3031. 1
  3032. 1
  3033. 1
  3034. 1
  3035. 1
  3036. 1
  3037. 1
  3038. 1
  3039. 1
  3040. 1
  3041. 1
  3042. 1
  3043. 1
  3044. 1
  3045. 1
  3046. 1
  3047. 1
  3048. 1
  3049. 1
  3050. 1
  3051. 1
  3052. 1
  3053. 1
  3054. 1
  3055. 1
  3056. 1
  3057. 1
  3058. 1
  3059. 1
  3060. 1
  3061. 1
  3062. 1
  3063. 1
  3064. 1
  3065. 1
  3066. 1
  3067. 1
  3068. 1
  3069. 1
  3070. 1
  3071. 1
  3072. 1
  3073. 1
  3074. 1
  3075. 1
  3076. 1
  3077. 1
  3078. 1
  3079. 1
  3080. 1
  3081. 1
  3082. 1
  3083. 1
  3084. 1
  3085. 1
  3086. 1
  3087. 1
  3088. 1
  3089. 1
  3090. 1
  3091. 1
  3092. 1
  3093. 1
  3094. 1
  3095. 1
  3096. 1
  3097. 1
  3098. 1
  3099. 1
  3100. 1
  3101. 1
  3102. 1
  3103. 1
  3104. 1
  3105. 1
  3106. 1
  3107. 1
  3108. 1
  3109. 1
  3110. 1
  3111. 1
  3112. 1
  3113. 1
  3114. 1
  3115. 1
  3116. 1
  3117. 1
  3118. 1
  3119. 1
  3120. 1
  3121. 1
  3122. 1
  3123. 1
  3124. 1
  3125. 1
  3126. 1
  3127. 1
  3128. 1
  3129. 1
  3130. 1
  3131. 1
  3132. 1
  3133. 1
  3134. 1
  3135. 1
  3136. 1
  3137. 1
  3138. 1
  3139. 1
  3140. 1
  3141. 1
  3142. 1
  3143. 1
  3144. 1
  3145. 1
  3146. I partially agree with the John Deere line on emission controls. Diesel engines do emit soot particles and NOx emissions without special tweaks. I'd argue that soot particles do not matter a bit in rural areas because it's only carbon and is not a problem in small amounts per area. As such, the DPF can safely be bypassed. NOx emissions are problematic no matter where they're emitted around the globe, so I agree that deleting NOx emission controls is a bad thing in all cases. However, that doesn't mean the engine control unit couldn't allow clearing the error codes from the user interface. The most important NOx emission control system is EGR, and the system can figure out in maybe 10 seconds whether it's working while the engine is running. So allow clearing the codes but maybe reduce the engine power while the EGR system has failed. That would allow completing the workday with the tractor but would provide some incentive to actually fix the EGR. (Usually this requires replacing the EGR valve, and I think a competent farmer could do that in 15 minutes unless John Deere has an even worse design than VW car diesel engines, which I assume is unlikely.) As for the software, the problem is not the required software but the protocol between the computer and the tractor. Again, speaking from experience with VW car software: VW has a secret protocol handshake which is required before you can speak to any of the controllers. Other than that, the interface follows publicly available standards with different settings channels (basically memory addresses to a software engineer) and values for those channels (basically memory values to a software engineer). If these protocol handshakes and channels were publicly documented, there would be no need for the OEM software. Basically the required info is along the lines of: "The Engine Control Unit has identifier 01, subchannel 13 is the turbo boost, the unit is absolute mbar as an integer". If you want to monitor real-time turbo boost, you connect to the CAN bus (the standard part of this whole system), do the SECRET handshake, connect to unit 01, select channel 13 and read the value. If it says e.g. 1325, you know that the current boost pressure is 325 mbar above the atmosphere, or about 4.7 psi of boost for US readers. Reading through all the channels would allow creating a backup of the current configuration, and if the owner then messes something up while tuning the control unit, all values could just be restored from the backup. For VW, Audi, Skoda and Seat (all manufactured by VAG), you can use software called VCDS by Rosstech, whose founder reverse engineered the secret handshake and built a business by selling MUCH cheaper software than VAG's that does everything the OEM programming unit can do. And VCDS has a superior user interface compared to the official unit. As such, Rosstech is doing much better work than VAG. However, if the protocol (including the secret handshake) were public, we would see many more software developers creating tools to program VAG cars. Currently we have the official VAG tools and Rosstech VCDS, but I think a company called OBDeleven has also completed the reverse engineering needed to create their own tools. In the end, the secret part of the protocol DOES NOT prevent 3rd party developers from creating software; it only raises the bar to do so and increases costs for all customers. 
I think John Deere should be totally okay with a setup where the user must unlock their tractor by requesting a serial-number-specific unlock code that removes all the secret handshakes and control unit locks. John Deere could require the user to accept that the warranty is void once the unlock is completed. Some Android manufacturers already do this. For example, to unlock any Sony smartphone, just follow the official instructions at https://developer.sony.com/develop/open-devices/get-started/unlock-bootloader/ - there's absolutely no reason why tractors or cars couldn't work the same way.
    1
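To make the channel-reading example above concrete, here is a small Rust sketch of the unit conversion only. The channel layout ("unit 01, channel 13 = absolute boost pressure in mbar") is the hypothetical example from the comment, not official VAG or John Deere documentation, and the actual bus handshake is exactly the part that remains secret.

    // Converts the raw absolute-pressure value read from the (hypothetical)
    // diagnostic channel into boost above atmosphere, in mbar and psi.
    const MBAR_PER_PSI: f64 = 68.9476;
    const ATMOSPHERE_MBAR: f64 = 1000.0; // approximation used in the comment

    fn boost_from_raw(raw_mbar: u16) -> (f64, f64) {
        let boost_mbar = raw_mbar as f64 - ATMOSPHERE_MBAR;
        (boost_mbar, boost_mbar / MBAR_PER_PSI)
    }

    fn main() {
        let (mbar, psi) = boost_from_raw(1325);
        // Prints: boost: 325 mbar above atmosphere (~4.7 psi)
        println!("boost: {mbar:.0} mbar above atmosphere (~{psi:.1} psi)");
    }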
  3147. 1
  3148. 1
  3149. 1
  3150. 1
  3151. 1
  3152. 1
  3153. 1
  3154. Sorry, I have to press the dislike button on this one due to the lack of facts, even though your video production is otherwise nice. You are missing three huge elephants in the room for hydrogen cars: (1) The efficiency of electricity -> hydrogen -> fuel cell -> electricity is much worse than electricity -> li-ion battery -> electricity, which is what pure electric vehicles use. When you have a renewable electricity source, it makes much more sense to charge li-ion batteries than to waste lots of energy on the electricity -> hydrogen conversion, because that produces lots of heat as waste. (2) The fuel tanks for hydrogen are both prohibitively expensive and weigh way too much. Hydrogen also cannot be compressed very much: one liter of gasoline contains more hydrogen than one liter of pure liquid hydrogen in a hydrogen tank, and the gasoline also carries lots of carbon in that same volume. (3) A hydrogen economy would require A LOT more hydrogen stations to make any sense. However, there's no sense in investing in new hydrogen stations because of issues 1 and 2 on this list. The only exception is local politics, which may dump enough money into building the stations even though there will not be any customers in the long run. Elon Musk is right on this one even though he fails to clearly explain why. TL;DR: Hydrogen cars would be slightly better than gasoline or diesel cars. However, hydrogen cars are MUCH worse than pure electric cars, which is why there's no sense in building any hydrogen cars.
    1
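The efficiency argument in point (1) is easy to see with a rough round-trip calculation. The stage efficiencies below are ballpark assumptions chosen only to illustrate the arithmetic, not measured figures from the video or any specific study.

    // Multiplies the per-stage efficiencies of an energy conversion chain.
    fn chain_efficiency(stages: &[f64]) -> f64 {
        stages.iter().product()
    }

    fn main() {
        // Assumed stages: electrolysis, compression/transport, fuel cell.
        let hydrogen_path = chain_efficiency(&[0.70, 0.90, 0.55]);
        // Assumed stages: li-ion charge, li-ion discharge.
        let battery_path = chain_efficiency(&[0.95, 0.95]);

        println!("hydrogen round trip: ~{:.0}%", hydrogen_path * 100.0);
        println!("battery round trip:  ~{:.0}%", battery_path * 100.0);
    }

With these illustrative numbers, roughly a third of the renewable electricity survives the hydrogen path versus most of it for the battery path, which is the core of point (1).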
  3155. 1
  3156. 1
  3157. 1
  3158. 1
  3159. 1
  3160. 1
  3161. 1
  3162. 1
  3163. 1
  3164. 1
  3165. 1
  3166. 1
  3167. 1
  3168. 1
  3169. 1
  3170. 1
  3171. 1
  3172. 1
  3173. 1
  3174. 1
  3175. 1
  3176. 1
  3177. 1
  3178. 1
  3179. 1
  3180. 1
  3181. 1
  3182. 1
  3183. 1
  3184. 1
  3185. 1
  3186. 1
  3187. 1
  3188. 1
  3189. 1
  3190. 1
  3191. 1
  3192. 1
  3193. 1
  3194. 1
  3195. 1
  3196. 1
  3197. 1
  3198. 1
  3199. 1
  3200. 1
  3201. 1
  3202. 1
  3203. 1
  3204. 1
  3205. 1
  3206. 1
  3207. 1
  3208. 1
  3209. 1
  3210. 1
  3211. 1
  3212. 1
  3213. 1
  3214. 1
  3215. 1
  3216. 1
  3217. 1
  3218. 1
  3219. 1
  3220. 1
  3221. 1
  3222. 1
  3223. 1
  3224. 1
  3225. 1
  3226. 1
  3227. 1
  3228. 1
  3229. 1
  3230. 1
  3231. 1
  3232. 1
  3233. 1
  3234. 1
  3235. 1
  3236. 1
  3237. 1
  3238. 1
  3239. 1
  3240. 1
  3241. 1
  3242. 1
  3243. 1
  3244. 1
  3245. 1
  3246. 1
  3247. 1
  3248. 1
  3249. 1
  3250. 1
  3251. 1
  3252. 1
  3253. 1
  3254. 1
  3255. 1
  3256. 1
  3257. 1
  3258. 1
  3259. 1
  3260. 1
  3261. 1
  3262. 1
  3263. 1
  3264. 1
  3265. 1
  3266. 1
  3267. 1
  3268. 1
  3269. 1
  3270. 1
  3271. 1
  3272. 1
  3273. 1
  3274. 1
  3275. 1
  3276. 1
  3277. 1
  3278. 1
  3279. 1
  3280. 1
  3281. 1
  3282. 1
  3283. 1
  3284. 1
  3285. 1
  3286. 1
  3287. 1
  3288. 1
  3289. 1
  3290. 1
  3291. 1
  3292. 1
  3293. 1
  3294. 1
  3295. 1
  3296. 1
  3297. 1
  3298. 1
  3299. 1
  3300. 1
  3301. 1
  3302. 1
  3303. 1
  3304. 1
  3305. 1
  3306. 1
  3307. 1
  3308. 1
  3309. 1
  3310. 1
  3311. 1
  3312. 1
  3313. 1
  3314. 1
  3315. 1
  3316. 1
  3317. 1
  3318. 1
  3319. 1
  3320. 1
  3321. 1
  3322. 1
  3323. 1
  3324. 1
  3325. 1
  3326. 1
  3327. 1
  3328. 1
  3329. 1
  3330. 1
  3331. 1
  3332. 1
  3333. 1
  3334. 1
  3335. 1
  3336. 1
  3337. 1
  3338. 1
  3339. 1
  3340. 1
  3341. 1
  3342. 1
  3343. 1
  3344. 1
  3345. 1
  3346. 1
  3347. 1
  3348. 1
  3349. 1
  3350. 1
  3351. 1
  3352. 1
  3353. 1
  3354. 1
  3355. 1
  3356. 1
  3357. 1
  3358. 1
  3359. 1
  3360. 1
  3361. 1
  3362. 1
  3363. 1
  3364. 1
  3365. 1
  3366. 1
  3367. 1
  3368. 1
  3369. 1
  3370. 1
  3371. 1
  3372. 1
  3373. 1
  3374. 1
  3375. 1
  3376. 1
  3377. 1
  3378. 1
  3379. 1
  3380. 1
  3381. 1
  3382. 1
  3383. 1
  3384. 1
  3385. 1
  3386. 1
  3387. 1
  3388. 1
  3389. 1
  3390. 1
  3391. 1
  3392. 1
  3393. 1
  3394. 1
  3395. 1
  3396. 1
  3397. 1
  3398. 1
  3399. 1
  3400. 1
  3401. 1
  3402. 1
  3403. 1
  3404. 1
  3405. 1
  3406. 1
  3407. 1
  3408. 1
  3409. 1
  3410. 1
  3411. 1
  3412. 1
  3413. 1
  3414. 1
  3415. 1
  3416. 1
  3417. 1
  3418. 1
  3419. 1
  3420. 1
  3421. 1
  3422. 1
  3423. 1
  3424. 1
  3425. 1
  3426. 1
  3427. 1
  3428. 1
  3429. 1
  3430. 1
  3431. 1
  3432. 1
  3433. 1
  3434. 1
  3435. 1
  3436. 1
  3437. 1
  3438. 1
  3439. 1
  3440. 1
  3441. 1
  3442. 1
  3443. 1
  3444. 1
  3445. 1
  3446. 1
  3447. 1
  3448. 1
  3449. 1
  3450. 1
  3451. 1
  3452. 1
  3453. 1
  3454. 1
  3455. 1
  3456. 1
  3457. 1
  3458. 1
  3459. 1
  3460.  @Josh-df1oj  Such problems are typically caused either by one of the capacitors on the motherboard going bad or by the PSU starting to fail. Or by bad contacts between some metal parts (e.g. RAM or CPU) as a result of thermal expansion and oxidation. I'd try reseating the RAM and, if that doesn't work, see if your BIOS supports CPU or RAM voltage offsets. For example, if you dial in a +0.1 V voltage offset for the CPU and RAM, that should get the system working again if the problem is a slightly too low voltage during current spikes. If that doesn't change anything, reset the offset to zero again, because increasing the voltage without a reason should be avoided: it will raise system temperatures a bit. The problem with bad caps (which are wear items and will always fail sooner or later!) is that the voltage regulation can no longer handle current spikes as well as it did when new. High-end motherboards typically have way more capacitance than originally required, so they may still work with a few caps gone bad, but low-quality motherboards typically need every capacitor to work as expected. Raising the voltages a bit (a positive voltage offset in the BIOS) makes the motherboard always overshoot the target voltage slightly, with the hope that when the voltage drops during a current spike the motherboard can still output enough voltage to keep the system running correctly. (That is, the motherboard may not be able to hold the requested offset during the spike, but because it starts above the spec voltage, the dip no longer falls below the spec voltage.)
    1
  3461. 1
  3462. 1
  3463. 1
  3464. 1
  3465. 1
  3466. 1
  3467. 1
  3468. 1
  3469. 1
  3470. 1
  3471. 1
  3472. 1
  3473. 1
  3474. 1
  3475. 1
  3476. 1
  3477. 1
  3478. 1
  3479. 1
  3480. 1
  3481. 1
  3482. 1
  3483. 1
  3484. 1
  3485. 1
  3486. 1
  3487. 1
  3488. 1
  3489. 1
  3490. 1
  3491. 1
  3492. 1
  3493. 1
  3494. 1
  3495. 1
  3496. 1
  3497. 1
  3498. 1
  3499. 1
  3500. 1
  3501. 1
  3502. 1
  3503. 1
  3504. 1
  3505. 1
  3506. 1
  3507. 1
  3508. 1
  3509. 1
  3510. 1
  3511.  @RobBCactive  Sure, the only way in the long run is to have an accurate API definition in machine-readable form. Currently, if you use the C API, you "just have to know" that it's your responsibility to do X and Y if you ever call function Z. Unless we have a machine-readable definition (be it in Rust or any other markup) there's no way to automate verifying that the code is written correctly. It seems pretty clear that many kernel developers have taken the stance that they will not accept machine-readable definitions in Rust syntax. If so, they need to be willing to have the required definitions in at least some syntax. As things currently stand, there are no definitions for lots of stuff, and other developers are left guessing whether a given part of the existing implementation is "the specification" or just a bug. If the C developers actually want the C implementation to literally be the specification, that is, with the current bugs being part of the specification too, they just need to say that aloud. Then we can discuss whether that idea is worth keeping in the long run. Note that if we had a machine-readable specification in whatever syntax, both the C API and the Rust API could be generated automatically from that specification. If that couldn't be done, the specification is not accurate enough. (And note that such a specification would only define the API, not the implementation. But the API definition would need to express responsibilities such as doing X or Y after calling Z, which C syntax cannot do.)
    1
  3512.  @RobBCactive  Do you agree that if we have a function like iget_locked() and after calling that function you MUST do something with the data or the kernel will enter a corrupted state, this behavior is part of the kernel API? Now, do you agree that C cannot represent this requirement? If you agree with both previous points, then you must also agree that there cannot, even in theory, be a compiler that can catch a programming error where the programmer fails to follow this API (assuming we only have C code as machine-readable input data). The Rust people are trying to say that the quality of the kernel would improve if we had a compiler that can catch errors of this kind. And the Rust compiler can already do this if you encode the API information in Rust syntax, using types to represent the API behavior (see the sketch below). And it seems that Linus agrees, and that's why Rust was accepted into the kernel despite Rust syntax having a much steeper learning curve than C. The C developers who want to downplay Rust are basically arguing either that (1) there's no need to catch programming errors automatically, or that (2) having to write down the exact requirements would be too expensive and is not worth the effort. Which camp do you belong to? I'm personally definitely against (1) because I see kernel-level security vulnerabilities and driver crashes way too often. And about (2) I'm not that sure. I think it's worth the effort to try it to improve the quality of the kernel. And the reason to try it on filesystem interfaces is that filesystems are so critical for data safety. If your GPU crashes every now and then, that's non-optimal. If your filesystem randomly corrupts data when threads access shared data incorrectly or some piece of code does a double free, that's a really bad day and hopefully you had backups.
    1
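A minimal sketch of what "encode the API rule in types" can look like. None of these names are the actual kernel Rust bindings; iget_locked, LockedNewInode and Inode are stand-ins used only to show the pattern:

```rust
// Sketch: push the "you MUST finish this after calling iget_locked()" rule
// into the type system so forgetting it fails at compile time.

struct Inode {
    ino: u64,
}

/// Returned when the inode was not cached: allocated but not usable yet.
/// The only ways to consume this value are `initialize` or `fail`, so
/// "forgot to unlock the new inode" is no longer expressible.
#[must_use = "a locked new inode must be initialized or failed"]
struct LockedNewInode {
    inner: Inode,
}

impl LockedNewInode {
    /// Stand-in for filling in the inode and calling unlock_new_inode().
    fn initialize(self) -> Inode {
        println!("inode {} initialized and unlocked", self.inner.ino);
        self.inner
    }

    /// Stand-in for iget_failed(): tear the half-built inode down.
    fn fail(self) {
        println!("inode {} marked bad and released", self.inner.ino);
    }
}

enum IgetResult {
    Cached(Inode),
    New(LockedNewInode),
}

/// Stand-in for the C iget_locked(): either an existing inode, or a new
/// locked one that carries an obligation for the caller.
fn iget_locked(ino: u64, already_cached: bool) -> IgetResult {
    if already_cached {
        IgetResult::Cached(Inode { ino })
    } else {
        IgetResult::New(LockedNewInode { inner: Inode { ino } })
    }
}

fn main() {
    // Happy path: the New arm is forced to do something with the obligation.
    let inode = match iget_locked(42, false) {
        IgetResult::Cached(existing) => existing,
        IgetResult::New(new) => new.initialize(),
    };
    println!("using inode {}", inode.ino);

    // Error path: something went wrong while filling in the new inode.
    if let IgetResult::New(new) = iget_locked(43, false) {
        new.fail();
    }
}
```

The point is that the obligation created by the call becomes a value that has to be consumed, so skipping the follow-up step is a compile-time problem instead of a runtime corruption.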
  3513. 1
  3514. 1
  3515. 1
  3516. 1
  3517. 1
  3518. 1
  3519. 1
  3520. 1
  3521. 1
  3522. 1
  3523. 1
  3524. 1
  3525. 1
  3526. 1
  3527. 1
  3528. 1
  3529. 1
  3530. 1
  3531. 1
  3532. 1
  3533. 1
  3534. 1
  3535. 1
  3536. 1
  3537. 1
  3538. 1
  3539. 1
  3540. 1
  3541.  @masskiller9206  That's why you go with aftermarket cartridges with Epsons. The cartridges I have automatically reset the ink counter back to full when it goes under 50%. The only issue this causes is that the ink display on the screen is not accurate, so I have to monitor the ink levels manually. There's an actually sensible reason for refusing to print if any color is empty: the default inks are pigment based, and the print head will get clogged if it is moved out of its resting position multiple times without enough ink in the nozzles. And the way the print head is built, the printer needs to apply some suction to it every now and then during printing. If you use water-soluble dye inks, you don't need to be afraid of the print head getting clogged, but the prints are not waterproof. The default Epson ink is so waterproof that I haven't found any solvent able to restore a badly clogged print head. With water-soluble dye inks, in the worst case I've had to push maybe half a cartridge's worth of ink through to get perfect test patterns. If you want waterproof prints, you should use a laser or sublimation printer, not an inkjet. I'm still wondering why nobody makes a printer that has both laser and inkjet hardware. Most of the cost of a modern printer is in the mechanical parts that would be shared between both technologies, and after that the most expensive part is the print head. So if you're willing to pay for a high-quality inkjet, including the extra parts for laser printing shouldn't increase the cost that much.
    1
  3542. 1
  3543. 1
  3544. 1
  3545. 1
  3546. 1
  3547. 1
  3548. 1
  3549. 1
  3550. 1
  3551. 1
  3552. 1
  3553. 1
  3554. 1
  3555. 1
  3556. 1
  3557. 1
  3558. 1
  3559. 1
  3560. 1
  3561. 1
  3562. 1
  3563. 1
  3564. 1
  3565. 1
  3566. 1
  3567. 1
  3568. 1
  3569. 1
  3570. 1
  3571. 1
  3572. 1
  3573. 1
  3574. 1
  3575. 1
  3576. 1
  3577. 1
  3578. 1
  3579. 1
  3580. 1
  3581. 1
  3582. 1
  3583. 1
  3584. 1
  3585. 1
  3586. 1
  3587. 1
  3588. 1
  3589. 1
  3590. 1
  3591. If you want fast charging at home, even here in Europe where 3-phase power is common so you can pretty easily get a 16 A 3x400 V (about 11 kW) connection to the car, the whole grid connection of your house may become the limiting cost factor. Our house has "only" a 3x25 A connection to the grid and while it can be increased with a contract change, the monthly bills for the beefier connection get much higher fast. The initial grid connection for 3x25 A costs about 1800 EUR (including all taxes) and if you want more, the beefier options are 3x35 A for 2500 EUR, 3x50 A for 3500 EUR, 3x80 A for 5700 EUR or 3x100 A for 7100 EUR. If you live further from the existing customers, expect to pay at least 20% more. In addition to that, you pay monthly fees for the maximum current you want from your grid connection. The basic 3x25 A connection costs 20 EUR/month (including all taxes) whereas, for example, 3x50 A costs 64 EUR and 3x100 A costs 155 EUR/month. Of course, that 3x100 A connection can only deliver about 69 kW (see the math below), so it's still pretty slow compared to proper fast chargers available with a CCS connector. And as you can see, even that level of fast charging at home gets pretty expensive indeed, so it really makes little sense to fast charge at home. Going from the basic 3x25 A connection (about 17 kW) to 3x100 A (about 69 kW) increases your monthly costs by about 135 EUR and, of course, the initial connection carries roughly 5300 EUR of additional cost. And note that you pay these fees merely for the possibility of fast charging; the actual electricity for the charging obviously comes on top of the mentioned costs. Suddenly charging at 11 kW sounds like a pretty nice deal!
    1
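For reference, the three-phase power figures above follow from the standard formula (assuming the usual 230/400 V grid):

\[
P = \sqrt{3}\,U_{LL}\,I
\]
\[
16\,\mathrm{A}:\ \sqrt{3}\times 400\times 16 \approx 11.1\,\mathrm{kW},\qquad
25\,\mathrm{A}:\ \approx 17.3\,\mathrm{kW},\qquad
100\,\mathrm{A}:\ \approx 69.3\,\mathrm{kW}
\]

(Multiplying 3 × 400 V × current overcounts, because with a 400 V line-to-line system each phase only sees 230 V to neutral.)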
  3592. 1
  3593. 1
  3594. 1
  3595. 1
  3596. 1
  3597. 1
  3598. 1
  3599. 1
  3600. 1
  3601. 1
  3602. 1
  3603. 1
  3604. 1
  3605. 1
  3606. 1
  3607. 1
  3608. 1
  3609. 1
  3610. 1
  3611. 1
  3612. 1
  3613. 1
  3614. 1
  3615. 1
  3616. 1
  3617. 1
  3618. 1
  3619. 1
  3620. 1
  3621. 1
  3622. 1
  3623. 1
  3624. 1
  3625. 1
  3626. 1
  3627. 1
  3628. 1
  3629. 1
  3630. Good video! I think it would have been even better with some of the repetition cut away. As for creativity, I think that if somebody feels they have already created a couple of things they are proud of but are running out of new ideas, redoing previous work with a twist is always an option. For example, if you're drawing comic characters, simply create a new character every day as a variation of the previous one. Maybe this one has an extra arm? Or a seriously long back? Or a perfectly circular face? Or a triangle-like face? After creating incremental changes, one per day for a year, you'll have a lot of change in total even though the whole journey was purely incremental. And you don't need to spend a lot of time per day on the actual implementation; the important part is to think about your work every day and make at least one change. Another way to force yourself to be more creative is to keep making incremental changes but add artificial limitations to your own methods: "today I must do everything with my non-dominant hand", "tomorrow I'll use pen and paper but I'll seriously wrinkle the paper before starting". If you're a digital artist working with vectors, you could decide that you cannot use the bezier tool today. And another day you decide that the bezier tool is the only tool you are allowed to use. I think I personally come up with the most creative solutions when the environment is the most restrictive. The examples you had in this video, where artists were in a situation where they either succeed or possibly end their career, match this same situation in my mind: you don't have any options, so you have to accomplish your best work using the very limited resources you currently have available. Combined with lots of practice (lots of small incremental changes over time), this allows you to fall back on your experience even with limited resources.
    1
  3631. 1
  3632. 1
  3633. 1
  3634. 1
  3635. 1
  3636. 1
  3637. 1
  3638. 1
  3639. 1
  3640. 1
  3641. 1
  3642. 1
  3643. 1
  3644. 1
  3645. 1
  3646.  @Rokaize  Correct Enigma usage required creating a nonce (a per-message key) and then setting the rotors to match that nonce, which would then be used to encrypt that specific message. Operators failed to do this and used a single non-random nonce, reusing the same nonce for multiple messages. I guess they were just lazy because it was easier to operate the machine this way. The correct operating procedure was: (1) use the wiring diagram and the shared daily three-rotor configuration, (2) create a nonce and send it twice encrypted with the daily configuration, (3) reconfigure all the rotors to match the nonce you just sent, (4) send the actual message. You can skip steps 1, 2 and 3 for all messages of the day if you just reuse the same nonce for every message! And you can also skip typing the first 6 letters, so you can simplify step 4, too. If the operator didn't understand that this undermines the security, following the official instructions would look simply stupid because you can do the "same job" much more easily with a constant nonce. The correct procedure still had the problem that the nonce was sent twice, which would make it easy to break with modern computers, but exploiting it would have required so much computational power that it would have been practically impossible with WW2 tech. If a random nonce had been used, only messages with fully identical first 6 letters in encrypted form could have been grouped to look for shared words. And even after that, you would only have recovered the rotor setting for those messages, and you would still need to work back to the daily key from that. (A toy sketch of the procedure is below.)
    1
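A toy sketch of that message-key procedure, in case it helps to see it as code. The "cipher" here is a trivial position-dependent shift standing in for the real Enigma machinery; only the key handling is the point, not the cryptography:

```rust
// Toy model of the daily-key / per-message-key procedure. Not Enigma.

fn toy_encrypt(rotor_setting: [u8; 3], plaintext: &str) -> String {
    plaintext
        .bytes()
        .enumerate()
        .map(|(i, b)| {
            // Position-dependent shift: a crude stand-in for stepping rotors.
            let shift = (rotor_setting[i % 3] as usize + i) % 26;
            let c = (b - b'A') as usize;
            (b'A' + ((c + shift) % 26) as u8) as char
        })
        .collect()
}

fn send_message(daily_setting: [u8; 3], message_key: [u8; 3], body: &str) {
    // Step 2: encrypt the chosen message key under the daily setting and send
    // it as the indicator. (The historical procedure sent it twice, which was
    // itself a weakness.)
    let key_as_text: String = message_key.iter().map(|&k| (b'A' + k) as char).collect();
    let indicator = toy_encrypt(daily_setting, &key_as_text);

    // Steps 3 and 4: move the rotors to the message key and encrypt the body
    // under that per-message setting.
    let ciphertext = toy_encrypt(message_key, body);

    println!("indicator: {indicator}  body: {ciphertext}");
}

fn main() {
    let daily_setting = [2, 17, 5]; // shared by every operator for the day
    // Correct use: a fresh message key per message.
    send_message(daily_setting, [7, 1, 19], "WEATHERREPORT");
    send_message(daily_setting, [3, 22, 8], "WEATHERREPORT");
    // Lazy use: the same message key reused for every message.
    send_message(daily_setting, [0, 0, 0], "WEATHERREPORT");
    send_message(daily_setting, [0, 0, 0], "WEATHERREPORT");
}
```

With the lazy constant nonce (the last two calls), identical plaintexts produce identical ciphertexts, which is exactly the kind of pattern codebreakers look for.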
  3647. 1
  3648. 1
  3649. 1
  3650. 1
  3651. 1
  3652. 1
  3653. 1
  3654. 1
  3655. 1
  3656. 1
  3657. 1
  3658. 1
  3659. 1
  3660. 1
  3661. 1
  3662. 1
  3663. 1
  3664. 1
  3665. 1
  3666. 1
  3667. 1
  3668. 1
  3669. 1
  3670. 1
  3671. 1
  3672. 1
  3673. 1
  3674. 1
  3675. 1
  3676. 1
  3677. 1
  3678. 1
  3679. 1
  3680. 1
  3681. 1
  3682. 1
  3683. 1
  3684. 1
  3685. 1
  3686. 1
  3687. 1
  3688. 1
  3689. 1
  3690. 1
  3691. 1
  3692. 1
  3693. 1
  3694. 1
  3695. 1
  3696. 1
  3697. 1
  3698. 1
  3699. 1
  3700. 1
  3701. 1
  3702. 1
  3703. 1
  3704. 1
  3705. 1
  3706. 1
  3707. 1
  3708. 1
  3709. 1
  3710. 1
  3711. 1
  3712. 1
  3713. 1
  3714. 1
  3715. 1
  3716. 1
  3717. 1
  3718. 1
  3719. 1
  3720. 1
  3721. 1
  3722. 1
  3723. 1
  3724. 1
  3725. 1
  3726. 1
  3727. 1
  3728. 1
  3729. 1
  3730. 1
  3731. 1
  3732. 1
  3733. 1
  3734. 1
  3735. 1
  3736. 1
  3737. 1
  3738. 1
  3739. 1
  3740. 1
  3741. 1
  3742. 1
  3743. 1
  3744. 1
  3745. 1
  3746. 1
  3747. 1
  3748. 1
  3749. 1
  3750. 1
  3751. 1
  3752. 1
  3753. 1
  3754. 1
  3755. 1
  3756. 1
  3757. 1
  3758. 1
  3759. 1
  3760. 1
  3761. 1
  3762. 1
  3763. 1
  3764. 1
  3765. 1
  3766. 1
  3767. 1
  3768. 1
  3769. 1
  3770. 1
  3771. 1
  3772. 1
  3773. 1
  3774. 1
  3775. 1
  3776. 1
  3777.  @herrschaftg35  I guess the total additional cost of UBI really depends on the existing benefit system. Here in Finland you can already get many different benefits even if you don't work. I think the unemployment benefit plus the housing benefit already total around 800-1000 EUR/month. However, if you take even a low-pay job, you'll immediately lose the unemployment benefit entirely, so it's better to stay at home than to accept such a job. With full UBI, accepting any job would always increase your total income compared to staying unemployed. In countries where the existing benefits are much worse, sure, changing directly to full UBI is too big a step to take in many cases. As for actually implementing UBI, at least here in Finland the politics has been too complex a problem to solve so far. Some people seem to understand UBI as a replacement for the unemployment benefit only, and when they then assume that all those people still need the housing benefit plus various additional benefits, the system immediately gets too expensive. The whole point of UBI is that it should replace all existing benefit systems, so if the existing benefit systems have a total budget of X EUR per year, that's the budget you can use for full UBI. Also note that here in Finland we have heavy progression in the income tax, too. In practice, people with high income would fully pay the UBI back in the form of income tax, so it doesn't actually increase the costs for those people. The tax receipt would just show a higher total income and a higher tax, and the actual usable income would be about the same as today.
    1
  3778. 1
  3779. 1
  3780. 1
  3781. 1
  3782. 1
  3783. 1
  3784. 1
  3785. 1
  3786. 1
  3787. 1
  3788. 1
  3789. 1
  3790. 1
  3791. 1
  3792. 1
  3793. 1
  3794. 1
  3795. 1
  3796. 1
  3797. 1
  3798. 1
  3799. 1
  3800. 1
  3801. 1
  3802. 1
  3803. 1
  3804. I totally agree with you except for the automatic code formatting. I strongly believe that there should be code formatting rules for a project, but they shouldn't be enforced by an automated process. Sure, have an automated check to tell you if you've broken the rules, and optionally allow automatically reformatting the new code, but there will always be situations where the code is more readable if you break the arbitrary rules your project ended up with. A basic example could be line length rules: if the code would be more readable by going over the line length limit by 7 characters, do it. Wrapping that code onto two or more lines might follow the formatting rules, but it would result in less readable code. Code readability is the most important thing. Any important piece of code will be written once but re-read many times, and you have to make sure that every reader understands it the same way. I strongly believe in self-documenting code, but the bar should be that if any member of your team ever has to ask about the code, it's poorly written. If any member of the team fails to understand your code, then it's not self-documenting. It's that simple. And in such cases, I always prefer fixing the code and, as a last resort, I write some comments. That said, I also try to write short documentation for every function or method (a DocBlock) with the design-by-contract rules for the caller, to help people using editors that can show that documentation while they modify the calling site, without needing to even see the actual well-written code. And after writing server software for a couple of decades, I've come to the conclusion that all parameters should carry explicit info about whether the argument is untrusted (raw user input is okay) or trusted (never ever pass any unfiltered user input here). Note that raw user input may come from a TCP/IP socket, a file, an environment string, a command line argument, an SQL connection or a REST API request: if the bytes in RAM can be affected by entities outside your code, they're untrusted. And untrusted data is contagious, so if your programming language doesn't have something akin to Perl's taint mode, you have to track untrusted data yourself from variable to variable (a small sketch of one way to do that is below). Also, a string is just a stream of unknown bytes unless you know the encoding and the intent. Many security issues happen because programmers fail to understand the data. For example, SQL injection attacks and XSS attacks are actually the same security problem under the hood: missing or wrong encoding for the context. In the case of an SQL injection attack, the typical problem is using a raw string when the actual context is "constant Unicode string within a string in an SQL query", and an XSS attack is caused by using a raw string when the actual context is "constant Unicode string within a JavaScript string embedded in SVG embedded in a data URL embedded in an attribute string embedded in an HTML5 document". Not every context can support raw binary strings, but if your function or method takes an untrusted string as input, it's your task to encode it or otherwise make it safe. If your method cannot accept random binary input, it will need to test for binary crap and throw an exception or handle the problem in some other way. Remember that if you don't write this safe code, then every calling site must re-implement it, or you'll have a security vulnerability waiting in the code.
I'm nowadays writing my functions and methods so that any data passed in must be safe for random binary input unless the parameter is explicitly marked as trusted, in which case the caller takes responsibility for data safety. And the automated tests for that code should actually use random binary test strings to make sure the code doesn't bitrot in the future.
    1
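A minimal sketch of that trusted/untrusted split pushed into the type system, in Rust for illustration. Untrusted, HtmlSafe and render_comment are made-up names, not from any library:

```rust
// Sketch: track "tainted" input with types instead of programmer discipline.

/// Raw bytes from the outside world (socket, file, env, CLI, DB, HTTP...).
struct Untrusted(String);

/// A string that has been encoded for one specific context: HTML text.
struct HtmlSafe(String);

impl Untrusted {
    /// Encode for the "text inside an HTML element" context.
    /// Every context (SQL literal, JS string, URL, ...) needs its own encoder.
    fn into_html_text(self) -> HtmlSafe {
        let mut out = String::with_capacity(self.0.len());
        for c in self.0.chars() {
            match c {
                '&' => out.push_str("&amp;"),
                '<' => out.push_str("&lt;"),
                '>' => out.push_str("&gt;"),
                '"' => out.push_str("&quot;"),
                '\'' => out.push_str("&#39;"),
                _ => out.push(c),
            }
        }
        HtmlSafe(out)
    }
}

/// This sink only accepts data that has gone through an encoder, so
/// "forgot to escape" becomes a type error instead of an XSS bug.
fn render_comment(body: HtmlSafe) -> String {
    format!("<p>{}</p>", body.0)
}

fn main() {
    let user_input = Untrusted(String::from("<script>alert(1)</script>"));
    // render_comment(user_input);  // would not compile: wrong type
    println!("{}", render_comment(user_input.into_html_text()));
}
```

The nice property is that a missing encoding step fails at the call site during compilation instead of surfacing as an injection bug in production.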
  3805. 1
  3806. 10:20 I think this viewpoint is simply false. Since good IDEs can show the last commit that modified each line, you can nowadays have a line-accurate description of why each line exists in the source code without having human-written comments in the source code! However, if you fail to write proper commit messages (documenting why the code is needed), you can never reach this level. And if you write proper atomic commits with proper commit messages, always rebase and never merge your own code, everything will be fine. And if you're pulling a remote branch and it can be merged conflict-free, you can do a real merge if you really want. If there's a conflict, do not even try to make a merge; tell the submitter to rebase, test again and send another pull request. The single biggest issue remaining after Git is handling huge binary blobs. If you want all the offline capabilities that Git has, you cannot do much better than copying all the binary blobs to every repository, and if you have lots of binary blobs, you'll soon run out of storage. If you opt to keep the binary blobs on a server only, you cannot access them offline or when the network is too slow to be practical for a given binary blob. 12:20 This wouldn't be a source control system, it's just a fancy backup system. The problem discussed here is purely a skill issue. I personally use Git with feature branches even for single-developer hobby projects and I spend maybe 10–20 seconds extra per branch in total.
    1
  3807. 1
  3808. 1
  3809. 1
  3810. 1
  3811. 1
  3812. 1
  3813. 1
  3814. 1
  3815. 1
  3816. 1
  3817. 1
  3818. 1
  3819. 1
  3820. 1
  3821. 1
  3822. 1
  3823. 1
  3824. 1
  3825. 1
  3826. 1
  3827. 1
  3828. Great interview! The only question I would have loved to see is this: if both Rust and C++ existed in their current state with no existing software written in either language, why pick C++ over Rust today? I understand that when you have hundreds of millions of lines of existing C++ code, comparing just the languages is not the only consideration. However, we should be asking which language is the best to teach to the next generation and the generation after that. For me personally, even though I know C++ better, Rust seems like the better language in the long run thanks to its memory safety and especially its freedom from data races. Multithreaded programming is so hard when you mix in shared memory and allocating and freeing resources in multiple threads that it's rare for people to get it correct without a lot of support from the compiler. And Rust seems to be the only language that even tries to fully do this. And I'm mostly interested in languages that have good enough performance. That basically rules out all garbage-collected languages such as Java and C#. You only need to check the implementation of those languages to come to that conclusion: both the JVM and the CLR are written in C++. If Java and C# were actually general-purpose enough, surely their own runtime systems would have been written in those languages, right? In reality, Java and C# performance is poor enough to require writing the runtime in C++ (or C or Rust, but C++ was selected for historical or practical reasons).
    1
  3829. 1
  3830. 1
  3831. 1
  3832. 1
  3833. 1
  3834. 1
  3835. 1
  3836. 1
  3837. 1
  3838. 1
  3839. 1
  3840. 1
  3841. 1
  3842. 1
  3843. 1
  3844. 1
  3845. 1
  3846. 1
  3847. 1
  3848. 1
  3849. 1
  3850. 1
  3851. 1
  3852. 1
  3853. 1
  3854. 1
  3855. 1
  3856.  @brodeypecha9233  Jet engines are different from piston engines because a jet engine is basically an open pipe with multiple fans rotating in it, in a configuration where one shaft rotates inside another. The fans closer to the fuel entry location are attached to the shaft that rotates around the other shaft, and all the rotation is driven by the moving air alone. A jet engine spins at around 10000–20000 RPM during normal operation, but the rotation speed does not directly determine the engine thrust, which makes diagnosing a failing engine much harder than with a piston engine. Thrust is a combination of fan movement and fuel burning. The dials in the cockpit do not show RPM but a percentage of the operational range of that specific shaft (labeled N1 and N2, N2 being the shaft that rotates around N1). I think the biggest problem with old engines like the ones in this aircraft is that they didn't have a dial to show engine vibrations. If you're missing parts of the engine, increasing RPM is not okay. Otherwise it should be okay to try to increase the thrust if N1, N2 and EGT are all below the redline, as they were in this accident for the left engine, which was operating just fine. However, if the engine is missing a fan blade, pushing the power lever up might cause the whole engine to explode, which may seriously damage the wing if you're unlucky. Engines are designed to be strong enough to contain even an engine explosion (which is not a guarantee, only a design objective), so I would have pushed the left engine to higher power instead of ditching into the ocean, even if it could potentially explode, because that wouldn't be much worse than ditching into the ocean during the night anyway. That said, the pilots were obviously under much higher stress, so this kind of reasoning during the incident might have been next to impossible.
    1
  3857. 1
  3858. 1
  3859. 1
  3860. 1
  3861. 1
  3862. 1
  3863. 1
  3864. 1
  3865. 1
  3866. 1
  3867. 1
  3868. The major problem is poor audio reproduction systems. When audio is originally mixed, the studio uses proper monitors with DSP-corrected subwoofers. The frequency carving for the dialogue is designed to work correctly if your audio reproduction system works correctly. When you play the final mix through e.g. tiny smartphone speakers, the low-frequency and high-frequency signals have so much distortion that the speech becomes corrupted, because that distortion adds extra noise in the important frequency band. If you have trouble understanding the dialogue without subtitles and the actors are not actually mumbling their words, try using high-quality headphones (preferably proper around-the-ear headphones) and you'll be surprised how clear the dialogue actually is. I think the peak of audio reproduction systems in general use was around the years 1980–2000. After that, smartphone speakers and other tiny battery-operated speakers took over the majority. If you truly want to keep using an inferior audio reproduction system (e.g. a smartphone speaker), then the least bad option is something called dialogue lift, which carries the spoken audio on a separate digital track and plays it back at a higher level than intended. This is basically the thing described around 6:05, and it will destroy the feel of the original audio, but in return you can hear the dialogue more easily. As you already opted for an inferior audio reproduction system, this is probably what you want anyway. I would personally opt for subtitles instead of messing with the audio. And despite the lies that marketing departments often tell, you cannot get high-quality audio from something as tiny as a smartphone unless you move the device really close to your ear (e.g. in-ear monitors). The speaker system I've connected to our TV costs about the same as the TV itself (which is 75" so it's not the cheapest one) and I would consider this the minimum level that's good enough for watching movies and listening to well-mixed music. Update: I see you had something related to this around 7:35.
    1
  3869. 1
  3870. 1
  3871. 1
  3872. 1
  3873. 1
  3874. 1
  3875. 1
  3876. 1
  3877. 1
  3878. 1
  3879. 1
  3880. 1
  3881. 1
  3882. 1
  3883. 1
  3884. 1
  3885. 1
  3886. 1
  3887. 1
  3888. 1:50 An example of how bad things can get in the car industry: Volkswagen TDI diesel engines. Take a roughly 20-year-old VW Passat and try to get a VNT turbo vacuum actuator for it from Volkswagen. The part is technically identical to the parts that were used during the 1980s to adjust spark timing in gasoline engines with vacuum control, except that the bolt holes are in a slightly different location and the spring inside the part may have a different strength. However, VW will not sell you the part separately; you have to purchase the whole turbo assembly instead, to which they have already bolted the required actuator. The official reason for selling this as a package is that the actuator has been pre-adjusted at the factory. The adjustment is done by rotating a single bolt, and it is verified by applying a predefined vacuum and measuring that the actuator moves the correct amount (I think it was something like 11 mm at 700 mbar of vacuum; the correct amount is officially a trade secret). I can handle such an adjustment myself, so I got an aftermarket vacuum actuator (costing about 18 EUR) instead of the official package with the turbo and actuator (costing about 1700 EUR). Now imagine that they had some kind of patent to prevent 3rd parties from manufacturing the needed actuator (which is literally based on nearly 50-year-old technology). So obviously, the service that the manufacturer provides is not the best option, even if it still costs less than replacing the whole device (a car in this case).
    1
  3889. 1
  3890. 1
  3891. 1
  3892. 1
  3893. 1
  3894. 1
  3895. 1
  3896. 1
  3897. 1
  3898. 1
  3899. 1
  3900. 1
  3901. 1
  3902. 1
  3903. 1
  3904. 1
  3905. 1
  3906. 1
  3907. 1
  3908. 1
  3909. 1
  3910. 1
  3911. 1
  3912. 1
  3913. 1
  3914. 1
  3915. 1
  3916. 1
  3917. 1
  3918. Great video as usual! I have one question about the visualization shown in this video. Shouldn't the attitude indicator show a lot more blue around 37:17? If I have understood correctly, the attitude indicator is driven by laser gyros and cannot be affected by freezing or other external error sources, so it should be assumed to be correct when multiple indicators show data that doesn't make sense as a whole. If you're flying at cruising altitude and don't know which indicators are correct, putting the attitude indicator straight on the center should always be the safe option. With the engines in the TOGA position, even if you were initially stalling in that attitude, it should turn out fine once the engines can push you forward enough. And the pilot monitoring could verify whether all three fully independent laser gyros agree on the attitude if you truly think that the attitude indicator is also faulty. Your point about the mental loss of hearing under high stress could explain a lot of this accident. The replies from the pilot flying didn't make much sense in the discussion, and since the stall warning was aural only, it could explain that pilot's behavior, too. Regardless of the flying altitude, the reaction to a STALL warning should be to push the nose down, but this pilot didn't do that; not hearing the warning would explain that behavior a lot better! It also appears that taking over the controls should be trained in the simulator. In this case both pilots seemed to press the little red button on the stick without saying anything to the other pilot.
    1
  3919. 1
  3920. 1
  3921. 1
  3922. 1
  3923. 1
  3924. 1
  3925. 1
  3926. 1
  3927. 1
  3928. 1
  3929. 1
  3930. 1
  3931. 1
  3932. 1
  3933. 1
  3934. 1
  3935. 1
  3936. 1
  3937. 1
  3938. 1
  3939. 1
  3940. 1
  3941. 1
  3942. 1
  3943. 1
  3944. 1
  3945. 1
  3946. 1
  3947. 1
  3948. 1
  3949. 1
  3950. 1
  3951. 1
  3952. 1
  3953. 1
  3954. 1
  3955. 1
  3956. 1
  3957. 1
  3958. 1
  3959. 1
  3960. 1
  3961. 1
  3962. 1
  3963. 1
  3964. 1
  3965. 1
  3966. 1
  3967. 1
  3968. 1
  3969. 1
  3970. 1
  3971. 1
  3972. 1
  3973. 1
  3974. 1
  3975. 1
  3976. 3:25 The important part to understand about the GPL is that it requires that the receiver of the binaries (that is, the buyer of the devices with the GPL'd code running on the chips) is given a full copy of all the source code covered by the GPL. The distributor can ask for a small fee to cover the data transmission costs (this clause was originally meant to allow billing for the cost of diskettes or a CD-R, and it cannot be used to make a profit). With the modern internet, the cost should be at most the data transmission costs of AWS or Google data centers. As an example, the maximum cost from AWS to any internet client is 0.09 USD per gigabyte. A full copy of the Linux kernel source (covered by the GPL) is about 0.3 gigabytes, so John Deere could bill about 0.03 USD per copy of their modified version of the Linux kernel (the arithmetic is below). If they have any other GPL'd software running, the owners of the devices manufactured by John Deere can request the respective source code. And the source code must match the code that John Deere distributes with their devices. The only problematic part could be that if their firmware validates digital signatures for the kernel binary, you cannot swap a modified kernel in place even if you had the full source code. This is called Tivoization, and GPL version 3 or greater has terms that prevent it. The Linux kernel is distributed under the GPL version 2 license, which doesn't close this loophole for manufacturers that want to make things extra painful for their customers. However, even with devices with built-in Tivoization you must give the corresponding source code to anybody who is running a device manufactured by you with a binary version of the GPL'd source code. Note that it doesn't matter whether the GPL'd source code was modified or used verbatim; the manufacturer of the device is required to give out the source code if requested.
    1
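The per-copy figure above is just the assumed egress price times the source size:

\[
0.3\ \text{GB} \times 0.09\ \text{USD/GB} = 0.027\ \text{USD} \approx 0.03\ \text{USD per copy}
\]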
  3977. 1
  3978. 1
  3979. 1
  3980. 1
  3981. 1
  3982. 1
  3983. 1
  3984. 1
  3985. 1
  3986. 1
  3987. 1
  3988. 1
  3989. 1
  3990. 1
  3991. 1
  3992. 1
  3993. 1
  3994. 1
  3995. 1
  3996. 1
  3997. 1
  3998. 1
  3999. 1
  4000. 1
  4001. 1
  4002. Our big fridge has an optional feature (with a toggle switch inside) to keep one box at a very accurate temperature around +0.1 °C. There is no documentation about the implementation, but everything suggests that there's a Peltier element between the inside of the fridge and the inside of the box. Our fridge is at around +2.0 °C otherwise, so this Peltier element only needs to cool down by about 1.9 °C on average, and it can do that very accurately. Of course, that little box requires an additional temperature sensor (or several, because the whole point of this box is accurate temperature control), and since the whole fridge is already controlled by a small CPU on a small motherboard, having a couple of extra sensors and a controller for the Peltier element is pretty easy. The manufacturer does warn that if you want to achieve the official efficiency stated in the ad, you cannot enable this feature. This obviously suggests that the implementation is not very efficient, which points to a Peltier element. The advertised use for this box is to keep highly sensitive foods as close to freezing temperature as possible without accidentally freezing them even momentarily. Peltier elements are also used in physics experiments for accurate temperature control. In that case, the Peltier element sits between the main cooling element and the actual scientific instrument whose temperature needs to be controlled accurately. Since you can run a Peltier element from a PWM controller in a closed-loop setup with a simple microcontroller, you can achieve highly accurate temperature control as long as your main cooling setup gets close enough to the target temperature even without the Peltier element (a sketch of such a control loop is below).
    1
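A minimal sketch of the kind of closed-loop PWM control described above; the gains and the toy thermal model are made-up illustration values, not from any actual fridge:

```rust
// Sketch: proportional closed-loop control of a Peltier element via PWM duty.

/// Simple proportional controller: returns a PWM duty cycle in 0.0..=1.0.
/// Cooling demand grows as the measured temperature rises above the target.
fn peltier_duty(target_c: f32, measured_c: f32, gain: f32) -> f32 {
    let error = measured_c - target_c; // positive = too warm, needs cooling
    (gain * error).clamp(0.0, 1.0)
}

fn main() {
    let target = 0.1; // °C, the "near zero" box
    let mut box_temp = 2.0; // °C, starts at the main fridge temperature
    for step in 0..20 {
        let duty = peltier_duty(target, box_temp, 0.8);
        // Toy thermal model: heat slowly leaks in from the +2 °C fridge,
        // and the Peltier removes heat proportionally to its duty cycle.
        box_temp += 0.05 * (2.0 - box_temp) - 0.15 * duty;
        println!("step {step:2}: duty {duty:.2}, box {box_temp:+.2} °C");
    }
}
```

A real controller would typically add an integral term (PI control) to remove the steady-state offset that this proportional-only loop leaves behind.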
  4003. 1
  4004. 1
  4005. 1
  4006. 1
  4007. 1
  4008. 1
  4009. 1
  4010. 1
  4011. 1
  4012. 1
  4013. 1
  4014. 1
  4015. 1
  4016. 1
  4017. 1
  4018. 1
  4019. 1
  4020. 1
  4021. 1
  4022. 1
  4023. 1
  4024. 1
  4025. 1
  4026. 1
  4027. 1
  4028. 1
  4029. 1
  4030. 1
  4031. 1
  4032. 1
  4033. 1
  4034. 1
  4035. 1
  4036. 1
  4037. 1
  4038. 1
  4039. 1
  4040. 1
  4041. 1
  4042. 1
  4043. 1
  4044. 1
  4045. 1
  4046. 1
  4047. 1
  4048. 1
  4049. 1
  4050. 1
  4051. 1
  4052. 1
  4053. 1
  4054. 1
  4055. 1
  4056. 1
  4057. 1
  4058. 1
  4059. 1
  4060. 1
  4061. 1
  4062. 1
  4063. 1
  4064. 1
  4065. 1
  4066. 1
  4067. 1
  4068. 1
  4069. 1
  4070. 1
  4071. 1
  4072. 1
  4073. 1
  4074. 1
  4075. 1
  4076. 1
  4077. 1
  4078. 1
  4079. 1
  4080. 1
  4081. 1
  4082. 1
  4083. 1
  4084. 1
  4085. 1
  4086. 1
  4087. 1
  4088. 1
  4089. 1
  4090. 1
  4091. 1
  4092. 1
  4093. 1
  4094. 1
  4095. 1
  4096. 1
  4097. 1
  4098. 1
  4099. 1
  4100. 1
  4101. 1
  4102. 1
  4103. 1
  4104. 1
  4105. The real problem here is that Samsung is using patents to do this. If they tried to use their trademarks or copyrights, they couldn't prevent 3rd-party parts that are clearly marked as 3rd-party parts. However, overly broad patents acting as a catch-all for practically anything allow Samsung to block any competition for their parts. It doesn't matter whether the 3rd-party manufacturers are actually using the patented technology to manufacture the parts, because too-broad patents basically cover anything. And invalidating already granted patents is next to impossible in many countries, thanks to judges not understanding the issue. This is about Samsung using the nuclear weapons called patents against everybody else. The patent system was originally created to let society benefit from published inventions, and in return the original inventor got a limited-time monopoly on the invention. However, currently it appears that patents are simply causing huge harm to society across all industries. The current patent system should be torn down, but it's an international system, similar to the copyright system, where the Berne Convention was supposed to be an ongoing process for tweaking the rules to balance the benefit to society against the benefit to the IP owner. However, in practice the Berne Convention is dead in the water and we're locked into rules from the 1980s, which match really poorly with the modern internet. In addition, the patent system never even had anything close to the Berne Convention, so its rules have always been broken.
    1
  4106. 1
  4107. 1
  4108. 1
  4109. 1
  4110. 1
  4111. 1
  4112. 1
  4113. 1
  4114. 1
  4115. 1
  4116. 1
  4117. 1
  4118. 1
  4119. 1
  4120. 1
  4121. 1
  4122. 1
  4123. 1
  4124. 1
  4125. 1
  4126. 1
  4127. 1
  4128. 1
  4129. 1
  4130. 1
  4131. 1
  4132. 1
  4133. 1
  4134. 1
  4135. 1
  4136. 1
  4137. 1
  4138. 1
  4139. 1
  4140. 1
  4141. 1
  4142. 1
  4143. 1
  4144. 1
  4145. 1
  4146. 1
  4147. 1
  4148. 1
  4149. 1
  4150. 1
  4151. 1
  4152. 1
  4153. 1
  4154. 1
  4155. 1
  4156. 1
  4157. 1
  4158. 1
  4159. 1
  4160. 1
  4161. 1
  4162. 1
  4163. 1
  4164. 1
  4165. 1
  4166. 1
  4167. 1
  4168. 1
  4169. 1
  4170. 1
  4171. 1
  4172. Fast acceleration (when executed correctly) will always save fuel because you can switch to higher gears earlier. In addition, non-turbo engines are most economical at full throttle because the pumping losses are minimal in that case. For turbo engines, the most economical acceleration is non-trivial. Obviously, if your tires start to slip, you'll be losing economy to the spinning wheels, so if your engine is "too powerful", you cannot run it in the most economical way. And any clutch slipping you do will also ruin your economy because it turns engine power into heat. In practice, with a manual gearbox you get the best economy by taking off at low RPM, shifting to 2nd with a pretty fast clutch action and then going full throttle if wheel traction is good enough. Human reaction time is just not good enough to do a full-throttle take-off in 1st gear without the tires or the clutch slipping a lot. And if you have computer-assisted launch control, it usually uses the brakes to keep traction, which is obviously the least economical option of all. Also, for maximum economy you would have to find the BSFC (Brake Specific Fuel Consumption) map for your engine and work out the optimal gear change points. Usually you should accelerate at full throttle and change gears maybe 500–1000 RPM above the peak torque, which in the case of a TDI engine might be near 3600 RPM (the idea is to hover near the optimum RPM point both before and after each gear change; rough numbers below). I haven't used TSI engines myself but I'd guess those should have an optimum point near 3800 RPM for upshifting.
    1
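To make the "hover around the optimum" idea concrete with assumed numbers (a ratio step of about 1.3 between adjacent gears is an assumption here, not a figure from the video):

\[
\text{RPM after upshift} \approx \frac{3600}{1.3} \approx 2770\ \text{RPM}
\]

So shifting at about 3600 RPM and landing near 2770 RPM keeps the engine straddling a BSFC sweet spot somewhere around 3100–3200 RPM for the whole acceleration.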
  4173. 1
  4174. 1
  4175. 1
  4176. 1
  4177. 1
  4178. 1
  4179. 1
  4180. 1
  4181. 1
  4182. 1
  4183. 1
  4184. 1
  4185. 1
  4186. 1
  4187. 1
  4188. 1
  4189. 1
  4190. 1
  4191. 1
  4192. 1
  4193. 1
  4194. 1
  4195. 1
  4196. 1
  4197. 1
  4198. 1
  4199. 1
  4200. 1
  4201. 1
  4202. 1
  4203. 1
  4204. 1
  4205. 1
  4206. 1
  4207. 1
  4208. 1
  4209. 1
  4210. 1
  4211. 1
  4212. 1
  4213. 1
  4214. 1
  4215. 1
  4216. 1
  4217. 1
  4218. 1
  4219. 1
  4220. 1
  4221. 1
  4222. 1
  4223. 1
  4224. 1
  4225. 1
  4226. 1
  4227. 1
  4228. 1
  4229. 1
  4230. 1
  4231. 1
  4232. 1
  4233. 1
  4234. 1
  4235. 1
  4236. 1
  4237. 1
  4238. 1
  4239. 1
  4240. 1
  4241. 1
  4242. 1
  4243. 1
  4244. 1
  4245. 1
  4246. 1
  4247. 1
  4248. 1
  4249. 1
  4250. 1
  4251. 1
  4252. 1
  4253. 1
  4254. I think gamification is good for figuring out which accounts are real users and which accounts are bots or spammers. The "reputation" (which I consider karma, really) on StackOverflow gets you more admin tools once you demonstrate enough sensible behavior. For example, with my current reputation I could go around the site, modify the descriptions of all the tags and mess things up seriously badly. So I understand why those actions are not available to any random spammer. But I think it's a really good idea to give more admin-like powers to users of the site as long as they demonstrate behavior that aligns with the site's objectives. Other than the ability to do things that are not allowed for newly created users or users with bad karma (e.g. spammers or bots), I don't really care how much reputation I have. If some future employer were ever interested in that kind of statistic, sure, it would be nice to be able to show that I have this much reputation on StackOverflow. I still feel that the reputation comes from sensible behavior, not from gaming the system. Gamification does result in some users primarily answering only easy questions that get lots of traffic via Google. I don't like that, but I understand that it's good for the site, because Google gives more value to StackOverflow pages when lots of users looking for simple answers click the StackOverflow page in the results. So even though I don't personally think those answers are worth a lot, I understand why even that kind of gamification benefits the whole site.
    1
  4255. 1
  4256. 1
  4257. 1
  4258. 1
  4259. 1
  4260. 1
  4261. 1
  4262. 1
  4263. 1
  4264. 1
  4265. 1
  4266. 1
  4267. 1