YouTube comments of Mikko Rantalainen (@MikkoRantalainen).
-
Some things are just hard enough to take a lifetime...
Donald Knuth has been writing a book about computer programming called "The Art of Computer Programming" since 1968 and it isn't yet ready either. The first 3904 pages have already been published, but the book is still unfinished and it remains to be seen whether he'll ever finish it, because he is already 85 years old. "Volume 4B: Combinatorial Algorithms, Part 2", consisting of 714 pages, was published in 2023, so he's still actively working on it.
He also has pre-print versions (called "pre-fascicles") 5A, 5B, 5C, 6A, 7A, 8A, 9B, 12A and 14A publicly available, but he doesn't yet consider those final, if I've understood correctly.
-
Sounds a bit similar to Finnish. The written form was mostly designed by a single person and it makes sense. However, learning the language is really, really hard because it uses multiple suffixes instead of prepositions. For example, the word "istua", meaning "to sit", can be modified to "istuisivatkohan", meaning "I wonder if they would sit", or "istuivatkohan", meaning "I wonder if they did sit". The first example, "istuisivatkohan", actually has five suffixes (-i, -si, -vat, -ko and -han) at the same time, and you have to use these suffixes in this specific order or the word doesn't make sense. I think every verb can be modified into over 3000 different forms just by using different suffix combinations.
On the plus side, there are no silent letters, no vowel modifications and no special hyphenation rules.
I think English – if it had its spelling fixed to match pronunciation, or its pronunciation fixed to match spelling – would be a much better language overall. Unfortunately, there's no single entity in the world that could fix the language called English. If the British were to try it, everybody else would just ignore such a change. If the USA were to try it, everybody else would just ignore such a change.
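As a toy illustration of that fixed suffix order (a sketch made up just for this example; it ignores vowel harmony, consonant gradation and the rest of real Finnish morphology, and simply concatenates the suffixes listed above in their required order):

```python
# Toy sketch: build "istuisivatkohan" and "istuivatkohan" by stacking the
# suffixes from the comment in their fixed order. Real Finnish morphology
# is far more involved; this only demonstrates the ordering rule.
SUFFIX_ORDER = ["i", "si", "vat", "ko", "han"]  # the order may not change

def inflect(stem: str, chosen: set[str]) -> str:
    return stem + "".join(s for s in SUFFIX_ORDER if s in chosen)

stem = "istu"  # from "istua", "to sit"
print(inflect(stem, {"i", "si", "vat", "ko", "han"}))  # istuisivatkohan
print(inflect(stem, {"i", "vat", "ko", "han"}))        # istuivatkohan
```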
-
@Kabivelrat The material that emits the actual photons is made out of organic compounds (the "O" in OLED) and that's the wear item. Because each pixel has its own light source, each pixel wears at a different rate depending on what the display has shown over its whole lifetime.
Running the display very bright means the individual pixels are driven with higher current, which wears each pixel faster.
Basically the best OLED manufacturers can do is to estimate the wear of every pixel and automatically compensate for it. In practice, this is implemented by having pixels that could emit, say, 500 cd when run at full blast while the display normally limits the maximum brightness to around 300 cd. Once a pixel has emitted enough light over its lifetime, its estimated wear is used to correct the drive level: to get 300 cd out of an old pixel, the panel may need a current that would have produced 450 cd when the pixel was new.
However, this technique requires driving the pixels with ever-increasing current levels as the display ages, and the more current you pump into individual pixels, the faster they wear. As a result, this trick can only prolong the life of the display so much.
In addition, if the compensation algorithm is a poor match for reality, the display will show some burn-in artefacts even with the compensation active.
In the future, we'll hopefully have MicroLED-based displays where each pixel is driven by a direct semiconductor LED element which doesn't have similar wear during use. Of course, LED elements fail over time, too, but the failure typically happens much later and is caused not by wear but simply by bad luck. However, MicroLED displays are really, really expensive to manufacture today because nobody has figured out how to make huge semiconductor elements cheaply.
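As a rough sketch of the compensation idea (a toy model made up purely for illustration, not any vendor's actual algorithm; the 500 cd / 300 cd headroom and the linear wear constant are assumptions taken from the example above):

```python
# Toy model of per-pixel OLED wear compensation (illustrative only).

PANEL_MAX_CD = 500.0       # what a brand-new pixel could emit at full drive
TARGET_MAX_CD = 300.0      # what the display actually promises to the user
WEAR_PER_CD_SECOND = 1e-9  # made-up wear constant

class Pixel:
    def __init__(self):
        self.efficiency = 1.0  # 1.0 = brand new, drops as the pixel wears

    def drive(self, requested_cd, seconds):
        # Compensation: ask for a higher drive level as efficiency drops...
        needed_cd = requested_cd / self.efficiency
        # ...but never beyond what the hardware can physically deliver.
        drive_cd = min(needed_cd, PANEL_MAX_CD)
        emitted_cd = drive_cd * self.efficiency
        # Driving harder also wears the pixel faster, so the compensation
        # accelerates the very aging it is trying to hide.
        self.efficiency -= WEAR_PER_CD_SECOND * drive_cd * seconds
        return emitted_cd

pixel = Pixel()
for year in range(1, 11):
    shown = pixel.drive(TARGET_MAX_CD, seconds=365 * 24 * 3600)
    print(f"year {year}: emitted {shown:.0f} cd, efficiency {pixel.efficiency:.3f}")
```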
-
This was my first introduction to Anglish, but it seemed to make much more sense than modern English. Anglish seemed to have a lot fewer discrepancies – probably because all the words follow the same logic instead of mixing mismatched source languages.
At least for somebody like me, to whom English is a non-native language, Anglish would be an easier language to learn. It also seems that the pronunciation of Anglish words has much less weirdness.
I would love to see the Pale Blue Dot speech in Anglish but I cannot do it myself. It appears that ChatGPT is not fluent in Anglish either. However, I was able to get it to emit the following:
"Look at that speck, that is us, all man-kind, on that mote of dust, in the vastness of space. That is the Earth, our home, a speck in the cosmic ocean, no bigger than a pixel in a photograph."
I would guess "photograph" should be something else, though.
-
@paulfogarty7724 Considering all the crap people are saying about Ryanair, I had to check this.
I quickly went through Ryanair's incident history and it indeed appears that there has never been a crash. In fact, the most serious incidents I found were as follows:
FR-654 in 2019 had the FO incapacitated and the captain returned for a safe landing.
FR-3918 in 2018 had the FO incapacitated; the captain declared Mayday in line with standard operating procedures and diverted to Trapani.
FR-7312 in 2018 had a loss of cabin pressure; the crew reacted in 2 seconds and initiated an emergency descent as expected.
FR-1192 in 2018 had a near collision when two planes got within 2.2 nm of each other; the cause was the failure of the PAL sector controller to identify the conflict in time.
FR-314 in 2017 overran the runway on landing and came to a stop on the paved surface. There were no injuries.
FR-4060 in 2017 had a tailstrike and returned for a safe landing after burning off enough fuel.
FR-817 in 2016 declared pan-pan because icing caused engine problems. The aircraft made a normal approach and landing at Dublin and all passengers disembarked normally.
FR-2446 in 2014 had a loss of separation because of a controller error.
FR-2848 in 2014 had a near collision due to controller failure; the separation between the aircraft reduced to 100 feet vertically and 1.4 nm laterally.
FR-3152 in 2013 had the captain incapacitated; the FO diverted the aircraft to Faro (Portugal), 160 nm southeast of their position, for a safe landing on runway 10 about 25 minutes later.
FR-3595 in 2013 had the separation between the aircraft reduced to 0.8 nm laterally and 650 feet vertically, involving a high risk of collision. The cause seemed to be both the controller and the crew not following proper radio protocol (not using their call sign, not requiring read-back).
FR-1664 in 2012 had the right pitot heating fail without indication, causing an instrument malfunction; the crew correctly diagnosed the problem and continued for a safe landing.
Some flight in 2011 with a Boeing 737-800 had the FO incapacitated; the aircraft landed safely on Girona's runway 20 about 45 minutes after the first officer handed the controls to the captain.
Some flight in 2010 with a Boeing 737-800 had to declare Mayday after diverting to an alternate airport; the crew's inadequate decision-making caused the fuel amount to drop below the required minimum reserve. The legal minimum was 1139 kg and after landing the aircraft had only 956 kg.
In 2010, a little girl fell through the gap between the handrail and the platform of the boarding stairs in Spain and suffered fractures of the ulna and radius of her left forearm. The CIAIAC analysis concluded that although the extendable handrails protect adults against falls well enough, the gap between the handrail and the platform is a danger for small children.
In addition to that, there were technical issues causing diversions, but I couldn't find anything really serious. For example, FR-7411 in 2019 had to shut down one engine in flight due to a lack of oil pressure.
Seems like a surprisingly good track record for an airline with that number of flights!
-
I'm a software developer (44m) who, about 5 years ago, used to think that we'd have AGI around 2050, but I currently think we'll have human-level AGI (IQ 100) around 2030. I also think that around 2050 we'll have superhuman AGI that can do everything better than any human being. That is, a single AGI system will do philosophy and mathematics, compose music, create movies, design new robots, write new software and control ships, airplanes or fighter jets better than the top people of each respective field. And as a corollary, it will create better improvements to itself than any group of humans could.
I truly believe that humans are a boot routine for a superhuman AI. And I think it's a good thing as a whole, because I see science, culture and art as the only final output of humankind. If a superhuman AI is better at each of those fields, we should build the superhuman AI to advance those fields.
-
@helmandblue8720 Finland's suicide statistics suggest that pre-emptive mental healthcare needs a lot more effort. Finland has, in theory, a mental support system from elementary school onward, but in reality the resources are not there. In theory, every child should have access to a "koulukuraattori", the mental health specialist working in schools. In reality, there's maybe one koulukuraattori per 500–2000 children in most municipalities. How good a job can you do with that many children you're supposed to take care of?
Another big problem is bullying in schools, where one misbehaving child can cause a lot of grief because the options available to the teachers and the school are so limited by the current system. I think the bullying child should be given a lot more mental care, and typically the parents of that child would also need a lot of mental support. However, the system is failing there right now.
That said, the suicide rates in Belgium, the United States, Latvia, Belarus and Russia are higher than in Finland. Finland has about 13.4 suicides per 100,000 population, Norway (which is doing better) has 11.8, and Sweden is in between with 12.4. The United Kingdom has 6.9 suicides per 100,000 population, so that part is definitely going better in the UK.
-
For example, you can scroll pretty much everything by grabbing the view with the middle mouse button and dragging the mouse. And that works with literally anything: menus, viewports, the timeline, control panels etc. You don't need to bother with scrollbars or anything like that. Also try shift+middle mouse button; it does different things nearly everywhere. You can copy-paste pretty much anything just by hovering over something, pressing ctrl+c, then hovering where you want it and pressing ctrl+v.
Also, since e.g. G (grab) is a modal action, you can press shift while you're moving something to position it more accurately (it makes your mouse movement something like 20x slower for the item you're moving), or you can directly type the position as a number (there isn't any numeric entry box, but you can see the already-typed value in the bottom left by default – and by "by default" I mean that you can reposition nearly everything in the UI).
You definitely want to learn the basic actions such as Grab, Size and Rotate by heart, so you trigger an action by pressing a keyboard key with your left hand, adjust the value by moving the mouse and confirm the action with a mouse button. The other mouse button cancels if you're not happy with the result, so you rarely need to use undo, because nearly all actions follow the action + adjustment + confirm pattern.
-
@Robbya10 I think that any optical system may miss at least one of: fine dust, oil, a frozen surface, a too-smooth surface versus a stone-like surface, moisture etc. The thing you're actually interested in is how much friction you currently have between the surface and the rubber tread available on your wheels, and the only way to make sure is to test for exactly that.
The problem is that rubber doesn't behave like a simple solid under pressure (it's somewhere between a solid and a liquid), so the only way to actually measure the friction is to apply enough pressure and braking force to get the rubber under realistic shear and compression forces. And at that point, you might as well use your real wheels for measuring the actual surface friction.
For example, every 10 seconds, try braking one (random?) tire up to its shear limit while the other 3 tires push forward to avoid losing speed, so you can do this without passengers noticing anything. If you do the braking with regenerative braking only, you can recapture about 80% of the energy, so it wouldn't be that wasteful. Basically the only problem is that it would cause extra tire surface wear.
You could also have additional sensors. For example, use microphones to listen to how the tires sound while they touch the surface, and if the sound doesn't change, assume that the surface is still similar and the friction doesn't need to be retested. You could also use humidity and temperature sensors to figure out if there's a risk of surface ice or aquaplaning. For dry and warm locations without dust on the road, there's little need to check the friction. However, on roads near 0 °C with high relative humidity, black ice is a real threat and there you might want to check every 2 seconds or so.
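A minimal sketch of that scheduling idea (the sensor and actuator hooks, thresholds and intervals are all made up for illustration; nothing here reflects any real traction-control system):

```python
import random

# Hypothetical sensor/actuator hooks, stubbed with fake values for the sketch.
# A real car would read these from its own sensors and wheel controllers.
def road_temperature_c() -> float:
    return 1.5

def relative_humidity() -> float:
    return 0.9

def tire_sound_changed() -> bool:
    return False

def brake_one_wheel_and_measure_mu(wheel: int) -> float:
    # Briefly brake one wheel towards its shear limit while the other three
    # push forward, and return the measured friction coefficient (faked here).
    return round(random.uniform(0.1, 1.0), 2)

def test_interval_s() -> float:
    """Decide how often it's worth sacrificing a little energy on a test."""
    near_freezing = -2.0 <= road_temperature_c() <= 4.0
    humid = relative_humidity() > 0.8
    if near_freezing and humid:
        return 2.0   # black-ice conditions: test very often
    if tire_sound_changed():
        return 5.0   # surface sounds different: re-check soon
    return 60.0      # warm, dry, unchanged surface: rarely test

for _ in range(3):
    wheel = random.randrange(4)  # spread the extra tread wear over all tires
    mu = brake_one_wheel_and_measure_mu(wheel)
    print(f"wheel {wheel}: measured mu about {mu}, next test in {test_interval_s()} s")
```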
-
I'm a bit over 40 and I've noticed that as I get older, I'm listening to more complex music such as symphonic metal or progressive rock instead of plain old rock. And I've also noticed that bands with highly skilled singers sound more interesting to me. As a teenager I mostly listened to instrumental music – I guess I didn't like the singers of most bands but was never able to figure out why. Nowadays I listen to, for example, Nightwish, Epica, Hans Zimmer's music, Thomas Bergersen, Wintersun, Ayreon, Seal, Mustasch, The 69 Eyes and Poets of the Fall. However, I also like some simpler songs by, for example, Battle Beast, Beast in Black, Taylor Swift, David Guetta, Camila Cabello, John Legend, Sia, Clean Bandit, Muse and Ariana Grande.
-
I totally agree! And even if you're the only member of the team, clearly separating the code from the tools allows you to switch tools later if you find a better tool.
I mostly write PHP code in RAII style for my daily job (I've been doing that for nearly 20 years already) and I've gone through xedit, kate, jedit, Eclipse PDT, PhpStorm and VS Code. Each has had its pros and cons. It appears that the DLTK library that Eclipse PDT is based on understands complex class hierarchies better than any of the other tools, but Eclipse often has performance problems. PhpStorm offers a somewhat similar experience – it can decipher the code and do some sanity checking automatically, and it can find some things that Eclipse cannot find and vice versa. And PhpStorm, also running on the JVM, has its own performance problems, too. VS Code seems a lot dumber when it comes to how much it understands about the code (this may actually be caused by the PHP Intelephense extension that you practically have to use to make VS Code understand anything about PHP), but VS Code has very stable performance: always acceptable but never truly great. The biggest problem I have with VS Code right now is that you cannot have line-level blame active while writing code. And the diff view between the workspace and the history is read-only!
And the Git tools in Eclipse, PhpStorm and VS Code are actually pretty weak. I prefer "git gui" and "gitk" any day over any of those. gitk may not have the flashiest graphics, but it can easily handle project histories with over 10000 commits, unlike most graphical tools. And "git gui blame" has better tools for figuring out the true history of a given line of code than any other tool. And git gui has a superior interface for committing lines instead of files; VS Code makes it really hard to build commits from specific lines only across multiple files, instead of snapshotting everything in the working directory.
-
IMO, this can be battled the same way AI should be trained even today: start with a small amount of trusted data (hand-selected articles and books from various fields, verified by experts). Then, for every new piece of content, make the AI estimate the quality of the data (can it be deduced from the trusted data?) and skip training on it if the content is not (yet?) considered high enough quality. Note that the quality of the data here means how well it is supported by the existing knowledge. Do this multiple times to create chains of high-quality data (that is, a piece of content was not deemed trusted earlier, but now that the AI has learned more, it will estimate the quality of the same data differently).
Keep track of the estimated quality of a given piece of data and recompute the estimate for all documents every now and then. If the AI's quality estimator is good enough, the estimated quality of the content should increase over time (because more chains allow accepting more new data), and cases where previously trusted content turns out untrusted later would point out problems in the system.
Also run the estimation against known-good high-quality data not included in the training set every now and then. It should be rated as high quality, and if the AI fails to identify it correctly, that demonstrates a lack of general understanding by the AI.
Once you demonstrate that the estimated quality matches expert evaluations of the same content well enough, you can start to train the AI to understand human misunderstandings, too: train on low-quality content as examples of humans failing to think or research correctly.
In the end, you should have an AI that can successfully estimate the quality of any new content and automatically either use it to extend its knowledge (chains of known-good content) or learn it as an example of low-quality content that the AI should avoid but be aware of.
If the AI doesn't get negative feedback from failed human content, it cannot understand failures in the tasks given to it.
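A minimal sketch of the bootstrapping loop described above (purely illustrative; estimate_quality, train_on and the 0.8 threshold are made-up placeholders for whatever scoring model and training step would actually be used):

```python
# Toy bootstrap: grow the trusted corpus in rounds, training only on content
# the current model already judges to be well supported by what it knows.

def estimate_quality(model, doc) -> float:
    """Placeholder: return 0..1 for how well `doc` is supported by `model`."""
    raise NotImplementedError

def train_on(model, docs, as_negative=False):
    """Placeholder: one training pass; negative examples teach failure modes."""
    raise NotImplementedError

def bootstrap(model, seed_corpus, candidate_pool, rounds=5, threshold=0.8):
    trusted = list(seed_corpus)
    train_on(model, trusted)
    for _ in range(rounds):
        scored = [(estimate_quality(model, d), d) for d in candidate_pool]
        accepted = [d for score, d in scored if score >= threshold]
        rejected = [d for score, d in scored if score < threshold]
        # Chains of high-quality data: accepted documents become trusted and
        # can make previously rejected documents deducible in a later round.
        trusted.extend(accepted)
        train_on(model, accepted)
        candidate_pool = rejected
    # Once the estimator tracks expert judgement well enough, also learn the
    # leftovers explicitly, as examples of flawed human thinking to recognize.
    train_on(model, candidate_pool, as_negative=True)
    return model, trusted
```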
-
I think that planned obsolescence as an intentional process is nonsense, too. Manufacturers just optimize to minimize their responsibility and if the device lasts the warranty period, it's no longer the responsibility of the manufacturer.
Demand warranty periods long enough that it starts to make sense for the manufacturer to repair things to honor the warranty, and things will get better automatically.
If it's cheaper for the manufacturer to replace your whole device under warranty instead of repairing it, they will put zero effort into making the device easier to repair. This is a result of most devices being good enough to last the warranty period: since replacements are rare, it's cheaper overall to just hand out a brand-new device in case of even a minor failure.
And as a bonus, most people actually like getting a fully new device when they hit any failure covered by the warranty. For example, Bose is known to give you a brand-new device in the original factory package in case of any fault in their products. They can sell their products at an extra premium because their customers can trust that in case of problems, they will get a totally new replacement, no questions asked.
Of course, that requires the warranty to be really strict about what's covered and what's not, or everybody is going to receive new devices, which would get too expensive for the manufacturer to sustain.
If you accept e.g. a smartphone that has a one-year warranty for the hardware and whose software support is EOL'd 18–24 months after release, you're part of the problem!
That said, the fact that manufacturers are allowed to hide the tools needed to do repairs is the biggest issue with right-to-repair. But it has nothing to do with planned obsolescence.
-
7:15 I think clerical errors are going to happen in the future, too. You cannot avoid that. However, you could have a sensible system where a lien or warranty cannot be set on you or your business without you being immediately made aware of it. And once you're aware of it, you can object to the clerical error. As such, this whole thing is not the result of one clerical error but a systemic failure of the system as a whole. You were just unlucky to be the one who suffered from this particular error.
I've been watching a lot of videos by Mentour Pilot, and the aviation industry doesn't even pretend that employees never make mistakes, no matter how much training they have. Extra training can only reduce the failure rate, not eliminate it. Instead, the whole system is designed on the basis that errors happen but can be caught before people start to suffer. Of course, if you're unlucky enough to have 3, 4 or 5 independent failures occurring simultaneously, things can still go sour. But with good system design, you can at least avoid a single point of failure.
It's clear that the system had a single point of failure where a single clerical error could mess up your life for 7 years until you finally noticed it by accident!
-
And it needs lots of experience. To stop on a spot from high speed, you have to have a pretty good understanding of the sharpness of your blades, the condition of the ice (how smooth it is, how much snow is on top) and the temperature of the ice. Then you just feel the pressure in your feet during the braking. You start the braking with knees bent, and if you need more stopping power, you press faster towards the ground; if you need less stopping power, you release some pressure and turn the blade a bit towards the 90-degree orientation.
If the ice is smooth enough, you can do the braking movement with the blade nearly perfectly orthogonal to the ice and the braking power is really low. The blade just skims the surface. However, your weight will go to the outside edge really easily if the ice is not perfectly smooth.
When most of the speed has been bled off (say about 5 km/h or less), you can basically hop on the skates during the braking action to stop instantly by pushing the edges into the ice. That stops the skates, and the remaining energy must be taken up by your muscles and by pushing you upwards (you still have your knees bent, right?). However, if you estimate your speed incorrectly, your muscles are not strong enough to take the instant stop, or your knees are not bent enough, your weight will go to the outside edge and you'll fall. It's not dangerous, but it will look a bit embarrassing.
-
I guess "xda" was supposed to be Extended Digital Assistant. I still think that custom ROMs would still be better than OEM firmware but DRM crap prevents me from running custom ROM today because so many apps look for hardware DRM these days and with DRM running on ring -1 and OS on ring 0, there's no way to break the hardware DRM without modifying the apps you run.
Make no mistake, true DRM doesn't exists if you own the hardware but if you don't own your hardware, then you cannot fake hardware based DRM. Remote systems can check if the system is running OEM firmware because hardware DRM allows remote attestation. The best we currently have is workarounds that make the user mode program believe that the device doesn't have hardware DRM and instead the software must accept soft DRM which is easy to fake. Passing the remote attestation doesn't guarantee that the system has been rooted at runtime but it does guarantee that the system has non-modified boot sector, assuming the DRM hardware is working.
This is obviously easy prevent in user mode apps simply by not accepting fallback to non-hardware DRM for remote attestation. And since Android 8.0, no OEM has been able to release new devices with pre-installed Google Play Store unless the hardware passed CTS which enforced hardware-based SafetyNet. As a result, app developers could stop supporting non-hardware attestation any day now and only lose customers still running Android 7.0 or older. That's basically nobody, so there's no practical benefit for allowing non-hardware attestation for the software developers!
Basically the only way to break the hardware based SafetyNet is to find a vulnerability in the firmware boot sequence to get your own code running on ring -1 to allow faking hardware DRM requests. And if this gets common, Google can simply blacklist that specific OEM identity to always distrust any hardware DRM attestation from that specific hardware. As a result, if you know how to fake hardware SafetyNet on some hardware, you cannot tell about it publicly if you want to keep that ability! As a bonus, Google will pay you minimum of 100K USD if you tell them how to bypass the hardware based attestation on any hardware so there's kind of incentive to not try to hide your work. The hardware attestation is based on digital signatures and device specific digital key that can sign messages that can be verified on remote servers. As a result, if e.g. Netflix wants to enforce DRM, they can setup their app connection to their network to require hardware attestation for login. If you block the DRM data or try to fake it, the device can no longer connect to Netflix network because Netflix knows that (1) all relevant hardware supports hardware based DRM so you cannot modify the response to claim that your hardware doesn't support hardware based attestation, and (2) the hardware response is digitally signed so you cannot change the response without failing the attestation.
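A minimal sketch of the challenge–response signature idea behind remote attestation (illustrative only and heavily simplified: it uses Python's cryptography package, generates the key pair in software, and skips the vendor certificate chain that real hardware attestation relies on; in a real device the private key never leaves the secure hardware):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# --- Device side (in reality this key lives inside the secure element) ---
device_key = ec.generate_private_key(ec.SECP256R1())
attestation_public_key = device_key.public_key()  # certified by the vendor in real life

def attest(challenge: bytes, boot_state: bytes) -> bytes:
    """Sign the server's challenge together with the measured boot state."""
    return device_key.sign(challenge + boot_state, ec.ECDSA(hashes.SHA256()))

# --- Server side (e.g. a streaming service deciding whether to allow login) ---
def verify_attestation(challenge: bytes, claimed_boot_state: bytes, signature: bytes) -> bool:
    try:
        attestation_public_key.verify(
            signature, challenge + claimed_boot_state, ec.ECDSA(hashes.SHA256())
        )
        return True   # signature matches: the claimed boot state is genuine
    except InvalidSignature:
        return False  # tampered response or faked hardware

challenge = b"random-server-nonce"
sig = attest(challenge, b"stock-bootloader")
print(verify_attestation(challenge, b"stock-bootloader", sig))  # True
print(verify_attestation(challenge, b"custom-rom", sig))        # False
```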
Obviously DRM for offline situations can still be broken at will. But for online stuff, the remote attestation cannot be broken.
For me, the single most important reason to root an Android device is to get a fully working backup solution. I hate that I cannot fully back up my Android device, and the only way to fix that is to break all software that looks for hardware DRM attestation. There's no way to have both DRM remote attestation and a working backup solution (one that can restore everything in case your hardware fails and you replace it with identical hardware). As a result, I nowadays have only partial backups (basically what adb backup allows). Even the iPhone has better backups!
If you only wanted root and accepted an unsafe OS, you could simply skip all the security updates and re-root the whole OS at runtime after every boot, keeping the OEM bootloader and firmware and using a security vulnerability that allows getting root access with non-modified OEM software. However, that doesn't allow running TWRP, which would be required for a full backup and restore. Runtime rooting would still allow using TitaniumBackup for installing and restoring software, but you would need to be running a knowingly vulnerable OS, which allows any other untrusted software to also root your system – obviously unsafe, unlike running a properly rooted Android.
I no longer own my phone and I hate it. And if Apple ever allows running browsers other than Safari, I'll no longer have a reason to use Android instead of an iPhone, because then both ecosystems will be equally limited!
-
Finnish is way more advanced here. Not only does it not have gender for nouns, it also doesn't distinguish between he and she in third-person references. In Finnish, the third person is referred to as "hän", and it can refer to any human being: man, woman, child, elder or baby. Finnish still has "se", meaning "it", which is used for all non-human references such as dogs, cats and bees.
Finnish also doesn't have the concept of definite or indefinite articles, which makes English harder to learn for Finns, because it takes a really long time to figure out any logical reason to use "a" or "the". (As a Finn, I still think definite and indefinite articles are just as needless as silent letters.)
As another twist, Finnish doesn't have a future tense either. It's expressed in alternative ways, such as "aion matkustaa huomenna junalla", which would translate directly as "I have a plan to travel by train tomorrow" instead of "I'll travel by train tomorrow".
None of the above means that Finnish is an easy language by any measure. When there are over 3000 different inflection forms for every verb, thanks to the ability to combine multiple suffixes onto the base form of a verb, it's pretty hard to learn when your own language has nothing similar. Thanks to the inflection forms, word order is mostly a stylistic choice. For example, "aion matkustaa huomenna junalla" is the same as (in a more poem-like style) "matkustaa junalla huomenna aion" or "huomenna aion matkustaa junalla" (which would put more emphasis on the travelling being done tomorrow). As a general rule, word order is used to express emphasis and the most important thing is put first.
-
The C/C++ types short, int and long are always integers that have a defined minimum size, and the actual size is whatever the hardware can support with maximum performance. If some hardware can process 64-bit integers faster than 16-bit or 32-bit integers, short, int and long could all be 64-bit integers. That was the theory anyway. In practice, due to historical reasons, compilers must use different sizes, as explained in the article.
The reason we have so many function calling conventions is also performance. For example, the x86-64 SysV calling convention is different from the x86-64 MSVC calling convention, and the Microsoft one has slightly worse performance because it cannot pass as much data in registers.
And because we need backwards compatibility as an option, practically every compiler must support every calling convention ever made, no matter how stupid the convention was from a technical viewpoint.
It would be trivial to declare that you use only packed structures with little-endian signed 64-bit numbers, but that wouldn't result in the highest possible performance.
And C/C++ is always about the highest possible performance. Always.
That said, it seems obvious in hindsight that the only sensible way is to use types such as i32, i64 and u128 and call it a day. Even if you have intmax_t or time_t, somebody somewhere will depend on it being 64-bit and you can never ever change the type to anything other than 64-bit. It makes much more sense to just define that the argument or return value is i64 and create another API if that ever turns out to be a bad decision.
The cases where you can just recompile a big C/C++ program and it works even if short, int, long, time_t and intmax_t change sizes are so rare that it's not worth making everything a lot more complex. The gurus who were able to make it all work with objects that change size depending on the underlying hardware will be able to make it work with a single type-definition file that encodes the optimal size for every type they really want to use.
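A quick way to see the platform dependence in practice is a small Python sketch using the standard ctypes module; the exact numbers depend on the platform's C ABI (on 64-bit Linux c_long is typically 8 bytes, on 64-bit Windows it's 4):

```python
import ctypes

# Print the sizes the platform's C ABI actually uses for the classic C types.
for name in ("c_short", "c_int", "c_long", "c_longlong", "c_size_t", "c_void_p"):
    ctype = getattr(ctypes, name)
    print(f"{name:<12} {ctypes.sizeof(ctype)} bytes")
```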
-
I fully agree that comments/documentation should be considered mandatory for any code that's supposed to live long – that is, maintained and developed further.
However, I don't believe in commenting individual lines but whole functions/methods. My rule of thumb is that if a method is public (usable by external code), it should have documentation – a promise about what it does – and basically state an Eiffel-like design-by-contract for the supported inputs. Whether you write it as docblocks above the method implementation or in the form of automated tests doesn't make a huge difference, but you should have clear documentation about what the code is supposed to do. That way you can figure out whether the implementation actually matches the original intent when you later need to modify the code. Without documentation you cannot know whether the handling of some specific edge case is intentional or a bug in the implementation.
I prefer docblock-style comments in mostly plain English, but I'm getting more and more strict about declaring whether any input parameter or result is trusted data or not. All input (user-generated data, files, network sockets, config files) should be considered untrusted, and anything directly computed from untrusted data should be considered tainted and, as such, untrusted too. If you write all code like this, you end up with a lot fewer security issues. And for all input and output string values, you have to declare the encoding in the documentation. The input might be an untrusted Unicode string and the output a trusted HTML text fragment – in that case the implementation must encode all the HTML metacharacters or there's a bug in the implementation. Without a docblock you cannot know whether that's intended or not.
That said, private methods (in class/object-oriented programming) do not need any documentation because they are just part of the implementation. I also don't think automated unit tests should bother testing private methods directly, only the behavior of the public methods. I'm on the borderline about whether even protected methods should be tested with unit tests – I'm currently thinking that if no public method actually uses a given private or protected method, that method is just dead code and should be deleted instead of getting unit tests.
In the end, when I write some code and a team member needs to ask me about the implementation (during code review or later), I usually end up fixing the implementation to be more readable. In that sense I believe in "self-documenting code": I use comments within the method only as a last resort – it's much better to write an implementation that can be fully understood without comments inside the function/method body.
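As an example of the kind of contract I mean (a small Python sketch; the job in question is PHP, so treat this as a language-neutral illustration of documenting the trust level and encoding of a public function):

```python
import html

def untrusted_text_to_html_fragment(text: str) -> str:
    """Convert untrusted Unicode text into a trusted HTML text fragment.

    Contract:
      - `text` is UNTRUSTED user input; any Unicode string is accepted.
      - The return value is a TRUSTED HTML text fragment: every HTML
        metacharacter (& < > " ') has been encoded, so the result can be
        embedded in element content or quoted attribute values as-is.
    """
    return html.escape(text, quote=True)

print(untrusted_text_to_html_fragment('<script>alert("x")</script>'))
```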
-
Great video! I think it basically boils down to how religious you are. If you look at things just from the scientific point of view, it should be obvious that an artificial womb is something we should try to create. And I strongly believe that we'll get that technology within our lifetime, because I currently think we'll get superhuman AI within our lifetime and it will be able to solve artificial womb technology even if we cannot.
The bigger ethical problem is what kind of spare parts it's okay to grow. Again, if you're religious, the answer is nearly none. And for atheists, the answer would be any of them, including the whole body. That will cause huge conflicts in the next couple of decades.
All that said, in the long run we should decide whether we still want evolution in Homo sapiens. If we allow everyone who is born to keep living practically forever and to reproduce no matter how bad their genes are, the results will be pretty poor in the long run. We already have the technology to prevent any evolutionary effects on the human race – the question is whether we want to use it on a grand scale.
For example, from an evolutionary standpoint, women should have wider hips even though current fashion trends may suggest otherwise. As such, we would have to prevent women with narrow hips from reproducing as much, because we've removed the evolutionary pressure of dying while giving birth due to too-narrow hips. However, the idea that you shouldn't have kids because it would be too dangerous without modern medicine doesn't seem very popular. Instead of sterilization, we allow such people to reproduce and carry the problematic genes forward. If we keep doing this for long enough, the artificial womb is the only solution we have.
-
Increasing hours just makes things worse. Here in Finland, children are usually in school for 6 hours a day, 5 days a week. And during those 6 hours, they get a 15-minute break every hour, so it's really 6 × 45 minutes per day from Monday to Friday, with a 2.5-month holiday for the summer, a 1-week holiday in the autumn, 2 weeks for Christmas and a 1-week holiday in the spring. Homework takes maybe 15-30 minutes per day and that's all. And Finland used to have really good results in the PISA tests around 2005. The curriculum details have since changed a bit and the results are a bit worse, but the length of education and the amount of homework haven't changed. I'm not sure whether the worse results are caused by the minor curriculum changes or by the use of smartphones in classrooms – in addition, one big change in Finland since 2005 has been integrating special education into normal classrooms without extra resources for the normal classroom's teacher. I would guess this might be the real cause of the worse results in recent years. It was done on the basis that it would improve understanding between "normal" and "special" people, but schools treated it as a cost-minimization technique and just reduced staff instead of having one regular teacher and one special-ed teacher per class. In practice, the regular teachers were expected to do everything they used to do and, in addition, all the stuff the special-ed teachers did previously.
The minimum education for teachers in Finland is a Master's degree from a university, which obviously helps, too. And Finland has practically no private schools, and parents don't get to choose which school their children go to, because all the schools follow the same curriculum with the same requirements for teachers.
-
I partially agree with the John Deere line about emission controls. Diesel engines do emit soot particles and NOx without special tweaks. I'd argue that soot particles don't matter a bit in rural areas because it's only carbon and is not a problem in small amounts per area; as such, the DPF can safely be bypassed. NOx emissions are problematic no matter where they are emitted around the globe, so I agree that deleting NOx emission controls is a bad thing in all cases. However, that doesn't mean the engine control unit couldn't clear the error codes from the user interface. The most important NOx emission control system is EGR, and the system can figure out in maybe 10 seconds whether it isn't working while the engine is running. So allow clearing the codes, but maybe reduce the engine power when the EGR system has failed. That would allow completing the workday with the tractor but would create some incentive to actually fix the EGR. (Usually this requires replacing the EGR valve, and I think a competent farmer could do that in 15 minutes, unless John Deere has an even worse design than VW car diesel engines, which I assume is unlikely.)
As for the software, the problem is not the required software but the protocol between the computer and the tractor. Again, speaking from experience with VW car software: VW has a secret protocol handshake which is required before you can speak to all the controllers. Other than that, the interface follows publicly available standards with different settings channels (basically memory addresses to a software engineer) and values for those channels (basically memory values to a software engineer). If these protocol handshakes and channels were publicly documented, there would be no need for the OEM software. Basically the required info is along the lines of: "the Engine Control Unit has identifier 01, subchannel 13 is the turbo boost, the unit is absolute mbar as an integer". If you want to monitor real-time turbo boost, you connect to the CAN bus (the standard part of this whole system), do the SECRET handshake, connect to unit 01, select channel 13 and read the value. If it says e.g. 1325, you know that the current boost pressure is 325 mbar above atmospheric, or about 4.7 psi of boost for US readers. Reading through all the channels would allow creating a backup of the current configuration, and if the owner then messes something up while tuning the control unit, all values could just be restored from the backup.
For VW, Audi, Skoda and Seat (all made by VAG), you can use software called VCDS by Ross-Tech, whose founder reverse engineered the secret handshake and built a business by selling MUCH cheaper software than VAG's that can do everything the OEM programming unit can do. And VCDS has a superior user interface compared to the official unit, so Ross-Tech is doing much better work than VAG. However, if the protocol (including the secret handshake) were public, we would see many more software developers creating tools to program VAG cars. Currently we have the official VAG tools and Ross-Tech's VCDS, but I think a company called OBDeleven has also completed the reverse engineering needed to create their own tools. In the end, the secret part of the protocol DOES NOT prevent 3rd-party developers from creating software; it only raises the bar to do so and increases costs for all customers.
I think John Deere should be totally okay with a setup where the user must unlock their tractor by requesting a serial-number-specific unlock code that removes all secret handshakes and control unit locks. John Deere could require the user to accept that the warranty is void once the unlock is completed. Some Android manufacturers already do this. For example, to unlock any Sony smartphone, just follow the official instructions at https://developer.sony.com/develop/open-devices/get-started/unlock-bootloader/ – there's absolutely no reason why tractors or cars couldn't work the same way.
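To make the channel-reading example concrete, here is a tiny sketch of the decoding step only (the unit/channel numbers and the "absolute mbar as integer" encoding are the hypothetical example from above, not a documented John Deere or VAG interface; the handshake and the CAN access are deliberately left out):

```python
PSI_PER_MBAR = 1 / 68.948  # 1 psi is roughly 68.948 mbar

def decode_boost(raw_value: int, atmospheric_mbar: int = 1000) -> tuple[int, float]:
    """Turn a raw channel value (absolute pressure in mbar) into boost pressure."""
    boost_mbar = raw_value - atmospheric_mbar
    return boost_mbar, boost_mbar * PSI_PER_MBAR

# Hypothetical reading from "unit 01, channel 13" as in the example above.
raw = 1325
mbar, psi = decode_boost(raw)
print(f"absolute {raw} mbar -> boost {mbar} mbar ({psi:.1f} psi)")
```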
-
I'm strongly in the "write once, read multiple times" camp when it comes to code and its documentation. Once you accept that any single line of code will be read more than once in the long term, you should make the code as readable and clear as possible. And in practice that may require adding comments. However, comments should be added because they make the code clearer, not just because the boss said so. My bar for how much clearer the code has to become is pretty low, though: if the code is any easier to understand with extra comments, add those comments. That said, it's a balancing act, because extra comments are just extra letters to read if the code would be perfectly understandable without them.
If you're truly writing a temporary solution that really will be thrown away in the near future, sure. In my experience, though, nothing is as permanent as a temporary solution that seems to work.
-
@RobBCactive Sure, the only way in long run is to have accurate API definition in machine readable form. Currently if you use the C API, you "just have to know" that it's your responsibility to do X and Y if you ever call function Z. Unless we have machine readable definition (be it in Rust or any other markup) there's no way to automate the verification that the code is written correctly.
It seems pretty clear that many kernel developers have taken the stance that they will not accept machine readable definitions in Rust syntax. If so, they need to be willing to have the required definitions with at least some syntax. As things currently stand, there are no definitions for lots of stuff and other developers are left guessing if a given part of the existing implementation is "the specification" or just a bug.
If C developers actually want the C implementation to literally be the specification, meaning that the current bugs are part of the specification too, they just need to say that aloud.
Then we can discuss whether that idea is worth keeping in the long run.
Note that if we had a machine-readable specification in whatever syntax, both the C API and the Rust API could be automatically generated from that specification. If that couldn't be done, the specification is not accurate enough. (And note that such a specification would only define the API, not the implementation. But the API definition would need to capture responsibilities such as doing X or Y after calling Z, which C syntax cannot express.)
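As a rough sketch of the kind of contract a typed definition can express but a C header cannot, consider the following Rust fragment. Every name here is invented for illustration and none of it comes from the actual kernel API; it only shows how "after calling Z you must call Z_finish exactly once" can be made machine checkable:

```rust
// Hypothetical example: the contract "after start_io() you must call
// finish_io() exactly once with the returned token" is encoded in the types.

pub struct IoToken {
    handle: u64, // pretend device handle
}

// The caller gets a token back and ignoring it triggers a compiler warning.
#[must_use = "finish_io() must be called with this token"]
pub fn start_io(device_id: u64) -> IoToken {
    // ... talk to hardware here ...
    IoToken { handle: device_id }
}

// Taking the token by value consumes it, so calling finish_io() twice with
// the same token is a compile-time error (use after move).
pub fn finish_io(token: IoToken) {
    // ... complete the transfer ...
    let _ = token.handle;
}

fn main() {
    let token = start_io(42);
    finish_io(token);
    // finish_io(token); // would not compile: token was already moved
}
```

A plain C prototype can only say "returns a handle"; the obligation to hand the handle back, and to do it only once, lives in prose or in the reader's head.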
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
If you want fast charging at home, even here in Europe where 3-phase power is common, so you can get a 3x16 A, 400 V connection (about 11 kW) to the car pretty easily, the whole electric grid connection of your house may become the limiting factor on cost. Our house has "only" a 3x25 A connection to the grid, and while it can be increased with a contract change, the monthly bills for a beefier connection get much higher fast.
The initial grid connection at 3x25 A costs about 1800 EUR (including all taxes), and if you want more, the beefier options are 3x35 A for 2500 EUR, 3x50 A for 3500 EUR, 3x80 A for 5700 EUR or 3x100 A for 7100 EUR. If you live further from the existing customers, expect to pay at least 20% more.
In addition to that, you pay monthly fees for the maximum current you want from your grid connection. The basic 3x25 A connection costs 20 EUR/month (including all taxes), whereas, for example, 3x50 A costs 64 EUR/month and 3x100 A costs 155 EUR/month.
Of course, even that 3x100 A connection can only deliver about √3 × 400 V × 100 A ≈ 69 kW, so it's still pretty slow compared to the proper fast chargers available with a CCS connector. And as you can see, even this level of fast charging at home gets pretty expensive, so it really makes little sense to fast charge at home. Going from roughly 11 kW charging (3x16 A) to roughly 69 kW charging (3x100 A) increases your monthly costs by about 135 EUR, and the initial connection carries roughly 5300 EUR of extra cost. Note that you pay these fees just for the possibility of fast charging; the actual electricity for the charging obviously comes on top of the mentioned costs. Suddenly that 11 kW sounds like a pretty nice deal!
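A quick sanity check of the numbers above, as a minimal sketch. Three-phase power with a 400 V line-to-line supply is P = √3 × U × I; the EUR figures are just the example prices quoted above, not universal tariffs:

```rust
// Compute the charging power of a 3-phase connection and the extra cost
// of the beefier connection, using the example prices mentioned above.

fn three_phase_kw(line_to_line_volts: f64, amps_per_phase: f64) -> f64 {
    // P = sqrt(3) * U_line-to-line * I, converted to kW
    (3.0_f64).sqrt() * line_to_line_volts * amps_per_phase / 1000.0
}

fn main() {
    let slow = three_phase_kw(400.0, 16.0);  // about 11.1 kW
    let fast = three_phase_kw(400.0, 100.0); // about 69.3 kW

    // Monthly fees and connection prices from the example above (EUR).
    let monthly_delta = 155.0 - 20.0;       // 3x100 A vs 3x25 A, per month
    let connection_delta = 7100.0 - 1800.0; // one-time connection cost

    println!("3x16 A  -> {:.1} kW", slow);
    println!("3x100 A -> {:.1} kW", fast);
    println!("extra monthly fee: {:.0} EUR", monthly_delta);
    println!("extra connection cost: {:.0} EUR", connection_delta);
}
```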
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I totally agree with you except for the automatic code formatting. I strongly believe that a project should have code formatting rules, but they shouldn't be enforced by an automated process. Sure, have an automated check that tells you when you've broken the rules, and optionally allow automatically reformatting the new code, but there will always be situations where the code is more readable if you break the arbitrary rules your project ended up with. A basic example is line length: if the code would be more readable by going over the line length limit by 7 characters, do it. Wrapping that code onto two or more lines might follow the formatting rules, but it would result in less readable code.
Code readability is the most important thing. Any important piece of code will be written once but re-read many times, and you have to make sure that every reader understands it the same way.
I strongly believe in self-documenting code, but the bar should be that if any member of your team ever has to ask about the code, it's poorly written. If any member of the team fails to understand your code, then it's not self-documenting; it's that simple. In such cases I always prefer fixing the code, and only as a last resort do I write some comments. That said, I also try to write short documentation for every function or method (a DocBlock), including the design-by-contract rules for the caller, so that people whose editors show that documentation at the call site can modify the code without needing to see the actual, well-written implementation.
And after writing server software for a couple of decades, I've come to the conclusion that every parameter should carry explicit information about whether the argument is untrusted (raw user input is okay) or trusted (never ever pass any unfiltered user input here). Note that raw user input may come from a TCP/IP socket, a file, an environment string, a command line argument, an SQL connection or a REST API request. If the bytes in RAM can be affected by entities outside your code, they are untrusted. And untrusted data is contagious, so if your programming language doesn't have something akin to Perl's taint mode, you have to track untrusted data yourself from variable to variable.
Also, a string is just a stream of unknown bytes unless you know the encoding and the intent. Many security issues happen because programmers fail to understand the data. For example, SQL injection attacks and XSS attacks are actually the same security problem under the hood: missing or wrong encoding for the context. In the case of an SQL injection attack, the typical problem is using a raw string when the actual context is "constant Unicode string within a string in an SQL query", and an XSS attack is caused by using a raw string when the actual context is, say, "constant Unicode string within a JavaScript string embedded in SVG embedded in a data URL embedded in an attribute string embedded in an HTML5 document".
Not every context can support raw binary strings, but if your function or method takes an untrusted string as input, it's your job to encode it or otherwise make it safe. If your method cannot accept arbitrary binary input, it needs to test for binary garbage and throw an exception or handle the problem in some other way. Remember that if you don't write this safe code, every calling site must re-implement it, or you'll have a security vulnerability waiting in the code.
Nowadays I write my functions and methods so that any data passed in must be safe to handle as arbitrary binary input unless the parameter is explicitly marked as trusted, in which case the caller takes responsibility for the data's safety. And the automated tests for that code should actually use random binary test strings to make sure the code doesn't bit-rot in the future.
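Here's a minimal sketch in Rust of the "track untrusted data in the type system" idea described above; all names are made up for illustration, and the sanitizer only handles the HTML-text context, not every context mentioned above:

```rust
// Hypothetical sketch: carry the "untrusted" flag in the type system so
// that trusted-only functions simply cannot receive raw user input.

/// Wrapper for data that came from outside the program (socket, file,
/// environment, CLI argument, database, HTTP request, ...).
pub struct Untrusted(String);

impl Untrusted {
    pub fn new(raw: impl Into<String>) -> Self {
        Untrusted(raw.into())
    }

    /// The only way out of the wrapper is through an explicit encoder,
    /// so every call site documents how the data was made safe for its
    /// context (here: HTML text content).
    pub fn sanitize_html(&self) -> String {
        self.0
            .replace('&', "&amp;")
            .replace('<', "&lt;")
            .replace('>', "&gt;")
            .replace('"', "&quot;")
    }
}

/// This function accepts trusted input only: the signature makes it
/// impossible to pass an Untrusted value here by accident.
fn render_template(trusted_fragment: &str) -> String {
    format!("<p>{}</p>", trusted_fragment)
}

fn main() {
    let user_input = Untrusted::new("<script>alert(1)</script>");
    // render_template(user_input) would not even compile; we must
    // sanitize first, which is exactly the point.
    let html = render_template(&user_input.sanitize_html());
    println!("{html}");
}
```

It's a poor man's taint mode: the compiler won't track every byte for you, but at least the API boundary forces the caller to state how the data was made safe.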
1
-
1
-
10:20 I think this viewpoint is simply false. Since good IDEs can show the last commit that modified each line, you can nowadays have a line-accurate description of why each line exists in the source code without having human-written comments in the source code!
However, if you fail to write proper commit messages (documenting why the code is needed), you can never reach this level. If you write proper atomic commits with proper commit messages, always rebase and never merge your own code, everything will be fine. If you're pulling a remote branch and it can be merged conflict-free, you can do a real merge if you really want. If there's a conflict, don't even try to make a merge; tell the submitter to rebase, test again and send another pull request.
The single biggest issue remaining with Git is handling huge binary blobs. If you want all the offline capabilities that Git has, you cannot do anything better than copy all the binary blobs into every repository, and if you have lots of binary blobs, you'll soon run out of storage. If you opt to keep binary blobs on the server only, you cannot access them when offline or when the network is too slow to be practical for a given blob.
12:20 This wouldn't be a source control system, it's just a fancy backup system. The problem discussed here is purely a skill issue. I personally use Git with feature branches even for single-developer hobby projects, and I spend maybe 10–20 seconds extra per branch in total.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Great interview! The only question I would have loved to see would have been as follows:
If both Rust and C++ existed in their current state with no existing software written in either language, why pick C++ over Rust today? I understand that when you have hundreds of millions of lines of existing C++ code, comparing just the languages is not the only consideration. However, we should be asking which language is the best one to teach to the next generation and the generation after that.
For me personally, even though I know C++ better, Rust seems like the better language in the long run thanks to its memory safety and especially its data-race-free promise. Multithreaded programming is so hard when you mix in shared memory and allocate and free resources across multiple threads that it's rare for people to get it right without a lot of support from the compiler. And Rust seems to be the only language that even tries to fully do this.
And I'm mostly interested in languages that have good enough performance, which basically rules out all garbage-collected languages such as Java and C#. You only need to check the implementations of those languages to come to that conclusion: both the JVM and the CLR are written in C++. If Java and C# were truly general-purpose languages, surely their own runtime systems would have been written in those languages, right? In reality, Java and C# performance is poor enough to require writing the runtime in C++ (or C or Rust, but C++ was selected for historical or practical reasons).
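To make the "data race free promise" concrete, here's a minimal generic Rust sketch (nothing specific to the interview): the compiler refuses to let two threads mutate shared state without synchronization, so the sharing has to be made explicit with Arc and Mutex.

```rust
// Minimal illustration of Rust's data-race prevention: shared mutable
// state across threads must be wrapped in a synchronization primitive,
// otherwise the program does not compile.
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // A plain `let mut counter = 0;` captured by several threads would be
    // rejected by the compiler; Arc<Mutex<_>> makes the sharing explicit.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    println!("total = {}", *counter.lock().unwrap()); // always 4000
}
```

The equivalent unsynchronized version in C or C++ compiles fine and only fails at runtime, which is exactly the class of bug the Rust compiler rules out up front.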
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Fast acceleration (when executed correctly) will always save fuel because you can shift to higher gears earlier. In addition, non-turbo engines are most efficient at full throttle because pumping losses are minimal there. For turbo engines, the most economical acceleration profile is less obvious.
Obviously, if your tires start to slip, you'll be losing economy by spinning the wheels, so if your engine is "too powerful", you cannot run it in the most economical way. Any clutch slip also ruins your economy because it turns engine power into heat. In practice, with a manual gearbox you get the best economy by taking off at low RPM, shifting to 2nd with a fairly fast clutch action and then going full throttle if wheel traction allows. Human reaction time is just not good enough to do a full-throttle takeoff in 1st gear without the tires slipping or the clutch slipping a lot. And if you have computer-assisted launch control, it usually uses the brakes to maintain traction, which is obviously the least economical option of all.
Also, for maximum economy you would have to find the BSFC (Brake Specific Fuel Consumption) map for your engine and work out the optimal shift points. Usually you should accelerate at full throttle and change gears maybe 500–1000 RPM above the peak-torque RPM, which for a TDI engine might be near 3600 RPM (the idea is to hover near the optimum RPM point both before and after each gear change). I haven't used TSI engines myself, but I'd guess their optimum upshift point is near 3800 RPM.
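A rough sketch of that "hover around the optimum RPM" idea: if you know the ratio between consecutive gears, shifting at a given RPM drops the engine to that RPM multiplied by the ratio step, so you can solve for the shift point that lands you back on the sweet spot. The gear ratios and the 3000 RPM sweet spot below are made-up illustration values, not data for any specific engine.

```rust
// Compute the RPM the engine lands on after an upshift and, from that,
// the shift RPM that puts the engine back on the chosen sweet spot.
// All gear ratios and the sweet-spot RPM are invented example values.

fn rpm_after_upshift(rpm_before: f64, current_ratio: f64, next_ratio: f64) -> f64 {
    rpm_before * (next_ratio / current_ratio)
}

fn main() {
    // Hypothetical gearbox: ratios for gears 1 through 5.
    let ratios = [3.6, 2.1, 1.4, 1.0, 0.8];
    let sweet_spot_rpm = 3000.0; // pretend this is near the BSFC optimum

    for pair in ratios.windows(2) {
        let (current, next) = (pair[0], pair[1]);
        // Pick the shift RPM so the engine lands on the sweet spot after
        // the shift, i.e. solve rpm_after_upshift(x) = sweet_spot_rpm.
        let shift_rpm = sweet_spot_rpm * current / next;
        let landing = rpm_after_upshift(shift_rpm, current, next);
        println!("shift at {:>5.0} RPM -> lands at {:>4.0} RPM", shift_rpm, landing);
    }
}
```

With a real BSFC map you'd pick the sweet spot from the map instead of guessing, but the gear-ratio arithmetic stays the same.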
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1