Youtube comments of Scott Franco (@scottfranco1962).

  1. 1200
  2. 832
  3. 777
  4. 501
  5. 496
  6. 451
  7. 361
  8. 330
  9. I own both of these cars, a 2017 Bolt and a 2018 Tesla M3. My wife drives the Bolt, I drive the M3. We both had EVs previously, a Leaf and a Spark. What changed principally from day to day is that we don't both rush for the charger every night. My wife likes the hatchback, the fact that the Bolt is slightly smaller, and she appreciates that she does not have to worry about range to get anywhere around the city. She rarely if ever charges away from home. I have a long commute, 44 miles round trip, charge about every 3 days, and use chargers away from home only when we take long trips. The biggest difference between the cars is charging away from home. We went from San Francisco to Los Angeles in both cars. In the Bolt, we were restricted to Highway 101, a longer route, because the shortest and most heavily traveled route has no CCS chargers. At all. According to PlugShare, it still does not. Trying to charge down the 101 is somewhat hit or miss. There "appear" to be a lot of high power chargers for CCS, but most of them are actually half-power ChargePoint 25 kW stations that take way longer to charge. On the way down we made it with one charge, but we left fully charged, stopped for 2 hours (!) at a 50 kW charger, and barely had any charge left when we made it to the San Fernando Valley. On the way back we hit three chargers, made worse because the hotel we stayed at had no charger, so we had to hit one on the way out. Note: if you can find a hotel with a charge spot, even only an L2, this is a huge help, since you leave the hotel fully charged. With the Tesla we left fully charged, hit the Harris Ranch charger even though we could have gone farther, and charged twice in LA before returning on the I5. Another charge at Harris Ranch got us home. There really is no comparison between the cars when it comes to on-the-road charging. With CCS charging you are lucky if there are two 50 kW charging spots, and the chances are good one of them is out of order. 
When you get there you fiddle with a card, and even having a card does not always help. My EVgo card works only 50% of the time or less, and they have sent me several replacement cards. Thus I waste 10 minutes at each charge calling them and setting up a charge on the account. With Tesla, we go to chargers and there are 10 spots, with some up to 20 (!) spots. You plug the car in and go. The billing is completely automatic, and the prices are reasonable. There is an odd thing going on with the A/B system: if you plug in to A, and another car is on B, or vice versa, you both get slowed down, so you pick an A-B pair that is unoccupied. Coming into a charge station with about 50 miles left (my personal minimum), you get to see the car charge at over 100 kW for about 2 minutes to reach over 200 miles left, then less after that. It's truly a breathtaking sight to see a car charge that fast, and apparently this is unique to the Model 3, which has improvements in charging speed even over the Model S. In short, we are happy with both cars, but the Bolt is clearly a local, city car, and the M3 is for long trips.
    306
  10. 243
  11. 231
  12. 214
  13. 210
  14. My favorite application for MEMS is in aviation, since I am a recreational pilot. One of the first and most successful applications of MEMS was accelerometers, which don't need openings in the package to work. Accelerometers can replace gyroscopes as well as enable inertial navigation, since they can be made to sense rotation as well as movement. With the advent of MEMS, avionics makers looked forward to replacing expensive and maintenance-intensive mechanical gyroscopes with MEMS. A huge incentive was reliability: a gyroscope that fails can bring down an aircraft. The problem was accuracy. MEMS accelerometers displayed drift that was worse than the best mechanical gyros. Previous inertial navigation systems used expensive optical gyros that worked by sending light through a spool of fiber-optic line and measuring the phase shift due to rotation. MEMS accelerometers didn't get much better, but they are sweeping all of the old mechanical systems into the trash can. So how did this problem get solved? Well, the original technology for GPS satellite location was rather slow, taking up to a minute to form a "fix". With more powerful CPUs it got much faster. GPS cannot replace gyros, no matter how fast it can calculate, but the faster calculation enabled something incredible: the GPS calculation could be used to calibrate the MEMS accelerometers. By carefully working out the math, a combined GPS/multi-axis accelerometer package can accurately and reliably find a real-time position and orientation in space. You can think of it this way: GPS provides position over long periods of time, very accurately, and MEMS accelerometers provide position and orientation over short periods of time, but not so accurately. Together they achieve what neither technology can do on its own. The result has been a revolution in avionics. 
Now even small aircraft can have highly advanced "glass" panels that give moving maps, a depiction of the aircraft attitude, and even a synthetic view of the world outside the aircraft in conjunction with terrain data. The system can even tell exactly which way the wind is blowing on the aircraft, because this information falls out of the GPS/accelerometer calculation.
    197
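The GPS/accelerometer pairing described in the comment above can be sketched as a complementary filter: trust the inertial estimate over short time scales, and continuously pull it toward the (slow but unbiased) GPS solution. The sketch below is a 1-D toy, not any real avionics algorithm; the blend factor (0.98) and the 0.1 m/s² sensor bias are made-up numbers for illustration.

```python
# 1-D complementary filter: GPS is accurate over long periods, the
# accelerometer over short periods. Blend the two each time step.
# All constants here are illustrative.

def fuse(estimate, gps_reading, alpha=0.98):
    """Trust the inertial estimate short-term, pull toward GPS long-term."""
    return alpha * estimate + (1.0 - alpha) * gps_reading

dt, bias, accel = 0.1, 0.1, 2.0    # step (s), sensor bias, true accel (m/s^2)
pos, vel = 0.0, 0.0                # GPS-aided estimates
pos_raw, vel_raw = 0.0, 0.0        # pure dead reckoning, for comparison

for step in range(1, 1001):        # simulate 100 seconds
    t = step * dt
    measured = accel + bias        # biased accelerometer reading
    vel_raw += measured * dt       # dead reckoning alone: the bias
    pos_raw += vel_raw * dt        # accumulates without bound
    vel = fuse(vel + measured * dt, accel * t)       # blend with GPS velocity
    pos = fuse(pos + vel * dt, 0.5 * accel * t * t)  # blend with GPS position

true_pos = 0.5 * accel * 100.0 ** 2
# Unaided dead reckoning ends up hundreds of meters off after 100 s;
# the fused estimate stays within a few meters despite the same sensor.
```

This is the cheapest possible fusion; real systems use a Kalman filter, which additionally estimates the sensor bias itself, but the "GPS calibrates the accelerometer" intuition is the same.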
  15. 157
  16. 142
  17. 141
  18. 130
  19. 122
  20. 120
  21. 111
  22. 103
  23. 101
  24. 100
  25. I have taken some heat before when joining projects that were in bad shape and the prevailing opinion was that they wanted to start over. I say, "you'll just make the same mistakes over again". I read a great story about a professor who took a position teaching electrical engineering at a college. The lab for it was in terrible shape; the instruments were all broken. The administrator asked the new professor to make a list of needed equipment and he would see if he could find the money for it. The new professor replied "no, not a problem. We will use what we have". The administrator left, stunned. The new professor started his classes and took the new students out to the lab. Over a course of months, they took apart the broken equipment, got schematics for it, and went over what was wrong with each instrument as a group project. Slowly but surely, they got most of it working again. The students that did this became some of the best engineers the school had seen. The moral of the story is applicable to software rewrites. The team that abandons the software and starts over does not learn anything from the existing software, even if they didn't write it. They create a new big mess to replace the old big mess. Contrast that with a team that is forced to refactor the code. They learn the mistakes of the code, how to fix it, and, perhaps most important of all, become experts at refactoring code. In the last 2 years, I instituted a goal for myself to track down even "insignificant" problems in my code, and to go after the hardest problems first. In that time I have been amazed at how often a "trivial" problem turned out to illustrate a deep and serious error in the code. Similarly, I have been amazed at how solving the hard problems first makes the rest of the code go that much easier. I have always been a fan of continuous integration without calling it that. 
I simply always suspected that the longer a branch diverged from the main code, the longer it would take to reintegrate, vs. small changes and improvements taking a day or so. I can't take credit for this realization: too many times I have been assigned to merge projects that were complete messes because of the long span of branch development. As the old saw goes, the better you perform such tasks the more of them you will get, especially if others show no competence in it.
    93
  26. 81
  27. 74
  28. 73
  29. 72
  30. 70
  31. 63
  32. 60
  33. 55
  34. 54
  35. 52
  36. 51
  37. 50
  38. 49
  39. 49
  40. 47
  41. 46
  42. 41
  43. 39
  44. 39
  45. 37
  46. 36
  47. 36
  48. 35
  49. 33
  50. 32
  51. 29
  52. 28
  53. 28
  54. 28
  55. 27
  56. 26
  57. 26
  58. 25
  59. 25
  60. 25
  61. 25
  62. 24
  63. 24
  64. 24
  65. 24
  66. 24
  67. 23
  68. 23
  69. 23
  70. 23
  71. 22
  72. Great video on one of my favorite subjects. I'd like to add a couple things. First of all (as the poster below said), this history skips a very important branch of IC history, the gate array, from which FPGAs take their name (Field Programmable Gate Array). Basically, gate arrays were ICs that consisted of a matrix of transistors (often termed gates) without the interconnect layers. Since transistors then, and largely even today, are patterned into the silicon wafer itself, this divided the wafer processing into two separate divisions: the wafer patterning, and the deposition of aluminum (interconnect). In short, a customer could save quite a bit of money by just paying for the extra masks needed to deposit interconnects, and take stock wafers to make an intermediate type of chip between full custom and discrete electronics. It was far less expensive than full custom, but of course that was like saying that Kathmandu is not as high as Everest. Xilinx used to have ads showing a huge bundle of bills with the caption "does this remind you of gate array design? Perhaps if the bills were on fire". Altera came along and disrupted the PLA/PAL market and knocked over the king o' them all, the 22V10, which could be said to be the 7400 of the PAL market. They owned the medium-scale programmable market for a few years until Xilinx came along. Eventually Altera fought back, but by then it was too late. However, Altera got the last word. The EDA software for both Xilinx and Altera began to resemble those "bills o' fire" from the original Xilinx ads, and Altera completely reversed its previous stance toward small developers (which could be described as "if you ain't big, go hump a pig") and started giving away its EDA software. Xilinx had no choice but to follow suit, and the market opened up with a bang. 
There have been many alternate technologies to the RAM cell tech used by Xilinx, each with an idea toward permanently or semipermanently programming the CLB cells so that an external loading PROM was not required. Some are still around, but what was being replaced by all that work and new tech was a serial EEPROM that was about 8 pins and approximately the cost of ant spit, so they never really knocked Xilinx off its tuffet. My favorite story about that was one maker here in the valley who was pushing "laser reprogrammability", where openings in the passivation of a sea-of-gates chip allowed a laser to burn interlinks and thus program the chip. It was literally PGA, dropping the F for field. It came with lots of fanfare, and left with virtual silence. I later met a guy who worked there and asked him "what happened to the laser programmable IC tech?". He answered in one word: contamination. Vaporizing aluminum and throwing the result outwards is not healthy for a chip. After the first couple of revs of FPGA technology, the things started to get big enough that you could "float" (my term) major cells onto them, culminating with an actual (gasp) CPU. This changed everything. Now you could put most or all of the required circuitry on a single FPGA, and the CPU to run the thing as well. This meant that software hackers (like myself) could get into the FPGA game. The only difference now is that even a fairly large-scale 32-bit processor can be tucked into the corner of one. In the olden days, when you wanted to simulate hardware for an upcoming ASIC, you employed a server farm running 24/7 hardware simulations, or even a special hardware simulation accelerator. Then somebody figured out that you could lash a "sea of FPGAs" together, load a big ol' giant netlist into it, and get the equivalent of a hardware simulation, but near the final speed of the ASIC. DINI and friends were born: large FPGA array boards that cost a couple of automobiles to buy. 
At this point Xilinx got wise to the game, I am sure. They were selling HUGE $1000-per-chip FPGAs that could not have had a real end-consumer use.
    21
  73. 21
  74. 21
  75. 20
  76. 18
  77. 18
  78. 18
  79. 17
  80. 17
  81. 17
  82. 16
  83. 16
  84. 16
  85. 15
  86. 15
  87. 15
  88. 14
  89. 14
  90. 14
  91. 14
  92. 14
  93. 14
  94. 14
  95. 14
  96. 13
  97. 13
  98. 13
  99. 13
  100. 13
  101. 13
  102. 13
  103. 12
  104. 12
  105. 12
  106. 12
  107. 12
  108. @bunyu6237 I think others could supply better info than I on this subject, since I haven't been in the IC industry since the late 1980s, decades ago. At that time, the (reverse engineering) industry was moving from hand reversing to fully automated reversing. However, if you don't mind speculation, I would say there is no concrete reason why the reversing industry would not have kept up with newer geometries. The only real change would be that it's basically not possible to manually reverse these chips anymore. I personally worked on reversing a chip at about 4 generations beyond the Z80, which was not that much. At that time, blowing up a chip to the size of a ping-pong table was enough to allow you to see and reverse engineer individual transistors and connections. Having said that, I have very mixed feelings about the entire process. I don't feel it is right to go about copying others' designs. I was told at the time that the purpose was to ensure compatibility, but the company later changed their story. On the plus side, it was an amazing way for me to get on board in the IC industry. There is nothing like reverse engineering a chip to give you a deep understanding of it. However, I think I would refuse to do it today, or at least try to steer toward another job. For anyone who cares about why I have a relationship to any of this: I used to try to stay with equal parts software and hardware. This was always a difficult proposition, and it became easier and more rewarding financially to stay on the software side only, which is what I do today. However, my brush with the IC industry made a huge impression on me, and still shapes a lot of what I do. For example, a lot of my work deals with SOCs, and I am part of a subset of software developers who understand SOC software design.
    12
  109. 12
  110. 11
  111. 11
  112. 11
  113. 11
  114. 11
  115. 11
  116. 10
  117. 10
  118. 10
  119. 10
  120. 10
  121. 10
  122. 10
  123. 10
  124. 9
  125. 9
  126. 9
  127. 9
  128. 9
  129. 9
  130. 9
  131. 9
  132. 9
  133. 8
  134. 8
  135. 8
  136. 8
  137. 8
  138. 8
  139. 8
  140. 8
  141. 8
  142. 8
  143. 8
  144. 8
  145. Sounds like Europe. 30% unemployment does not happen overnight. So let's take a real example. McDonald's goes full auto. I mean you order from a kiosk, the food is made entirely by machine, delivered to a slot in front. They're not far from that now. I would argue that McDonald's would end itself by doing that; you don't want to narrow the difference between you and a vending machine to zero. But let's say so for the sake of argument. Now, a very small slice of the population makes a career of McDonald's. Some do, and I honestly love those guys; Mickey D's makes them wear ties, because they are the store managers. They are freaking awesome. They are the ones who are going to run the world someday. No, most are kids using McDonald's to pay for school, and they will be out of there soon. So as valuable to society as burger flipping is, or nowadays pressing buttons on the burger-flipping machine, I suspect there are more valuable careers. So they are studying for the medical profession, or insurance actuary, whatever. The way you get to 30% unemployment is to give people infinite unemployment benefits, more than they ever put into the system by working. That's how Europe does it, and that's how we did it. Remember the "99ers"? That was the program that dramatically increased the length of unemployment benefits. We ended that program, and everyone in Washington predicted disaster. So what actually happened? Turns out employment shot up. Right after they canceled the program. Now we have this "universal basic income" jazz, which means we take money from people who work (me) and give it to people who want to smoke pot all day. I.e., we want to be like Europe now. Now, before you dismiss me as a snot-nosed rightist, I lived through the 2008 crash AND a divorce AND two kids to support AND a house to keep out of foreclosure. I had 6 months of unemployment at the bottom of the recession. Nobody could get a job then, even in my tech fields. 
I used to keep spreadsheets on my finances every week, and I ran red ink for years. What's the solution? Again, stop whining and go to work. You don't have to be the smartest worker or the hardest worker. You just have to be smarter than the other guys, and work harder than the other guys.
    7
  146. 7
  147. 7
  148. 7
  149. @SuperWhisk It's a big subject (age). It would explain why companies take efforts to figure out your age even if you are not allowed to list it. I used to have companies ask me for my date of birth while quickly adding "it's just for identification purposes!". Perhaps this might shock you, but I don't think I blame them. If you are in or near retirement age, the company has to assume you are looking at the calendar and wondering if keeping on working is really worthwhile. Does that apply to me? I don't think so. If I were in retirement and someone gave me a remote contract with reasonable hours, I would take it, even if I had to come to the company part of the time. And this arrangement seems popular. But then I like to work, and like to get out of the house on occasion. My wife of 10 years feels the same way. Is age discrimination unfair? Wellll... yes, but discrimination by skin color or similar reasons is a lot less fair. I would say trying to categorize everyone is really the issue. A short story (yeah, again, sorry). I worked at a place where another employee was clearly older, and likely retirement age. His boss was a friend of mine, and he ended up terminating him. I asked him about it, and he said he gave him several chances to improve his productivity, but without result. It seems to be all relative anyways. At 65, I found that unless I take a nap after lunch I can't function; I sit at my terminal and fog out. A 20-minute nap fixes that. I had one boss who had a real problem with that. At my current contract (Google), they actually have rooms to take naps in (god I love this place).
    7
  150. 7
  151. 7
  152. 7
  153. 7
  154. 7
  155. 7
  156. 7
  157. 7
  158. 7
  159. 7
  160. 7
  161. 7
  162. 6
  163. 6
  164. 6
  165. 6
  166. 6
  167. 6
  168. 6
  169. 6
  170. 6
  171. 6
  172. 6
  173. 6
  174. 6
  175. 6
  176. 6
  177. 6
  178. 6
  179. 6
  180. 6
  181. It's a good documentary. It also feels older than its date here on YouTube: a lot of emphasis on the Model S, not much Model 3. I like the fact that they actually found and interviewed an ex-Tesla employee. American TV has been really behind the curve in showing the real face of Tesla. The German makers are stepping up to the plate with mid-price cars with real range and charge times, like the e-tron. However, they have a lot of ground to make up, and Tesla is moving ahead again, with lower prices and a new 250 kW charger vs. the 150 kW/350 kW chargers of the CCS family. The issue there is that Tesla is far, far ahead in charger numbers. The vast majority of CCS charging stations are 50 kW or even less (it's hard to tell, since the best web site, PlugShare, will not filter by charger wattage). Eventually, the American makers along with Europe will wake up and push chargers, but this is not really their priority right now, since the number of cars capable of 150 kW or better charging is minimal. The lag by the German makers has been impressive. BMW's cars have been expensive, with the same range (< 100 miles) as the low-end EVs, and no competition for Tesla whatever. The Porsche Taycan is competition for the Model S, a high-end one at that, a car that is already going obsolete. Good luck finding a 350 kW charger for this science project. The e-tron is more direct competition for what is becoming the main car of Tesla, the Model 3, but again, it is far behind on the charging curve. VW shows the most promise, but so far that is all we have seen from them: promises, promises. In short, the German makers are talking now about waking up and meeting the competition from Tesla. If that is true, then the German makers need to put down the coffee and get on the train to work. Time is ticking away. One thing I think gets lost in the "car makers vs. 
Tesla" story is that EV makers around the world are not competing against American car makers, who are still (by comparison) fast asleep (in the back of a pickup truck). They are competing against Silicon Valley, and Silicon Valley companies MOVE. Like, rapidly.
    6
  182. 6
  183. 6
  184. 6
  185. 6
  186. 6
  187. 6
  188. 5
  189. 5
  190. 5
  191. 5
  192. 5
  193. What people miss about the RISC revolution is that in the 1980s, with Intel's 8086 and similar products, the increasingly complex CPUs of the day were using a technique called "microcoding": a lower-level instruction set inside the CPU that ran instruction decoding, etc. It was assumed that the technique, inherited from mini and mainframe computers, would be the standard going forward, since companies like Intel were increasing the number of instructions at a clip. RISC introduced the idea that if the instruction set were simplified, CPU designers could return to pure hardware designs, with no microcode, and use that to retire most or all instructions in a single clock cycle. In short, what happened is the Titanic turned on a dime: Intel dropped microcode like a hot rock and created pure hardware CPUs, showing that any problem can be solved by throwing enough engineers at it. They did it by translating the CISC x86 instructions to an internal RISC form and deeply parallelizing the instruction flow, the so-called "superscalar" revolution. In so doing they gave x86 new life for decades. I worked for Sun for a short time in the CPU division when they were going all in on multicore. The company was already almost in freefall. The SPARC design was flawed and the designers knew it. CEO Jonathan faced questioning at company meetings when he showed charts with Java "sales" presented as if it were a profit center (instead of given away for free). I came back to Sun again on contract after the Oracle merger. They had the same offices and the little Java mascots on their desks. It was probably telling that after my manager invited me to apply for a permanent position, I couldn't get it through their online hiring system, which was incredibly buggy, and then they went into a hiring freeze, so it was irrelevant. I should also mention that not all companies did chip design in that era with Sun workstations. 
At Zilog we used racks full of MicroVAXes and X Window graphics terminals. I still have fond memories of laying out CPUs and chain-smoking in the late 1980s until midnight.
    5
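The CISC-to-internal-RISC translation described above can be caricatured in a few lines: a complex memory-operand instruction is "cracked" into simple load/operate/store micro-ops, each of which can retire quickly. The instruction tuples and micro-op format below are invented for illustration; real x86 decoders are vastly more involved.

```python
# Toy illustration of cracking a CISC-style instruction into RISC-like
# micro-ops, in the spirit of modern x86 front ends. The instruction
# encoding here is made up for the example.

def crack(instr):
    """Translate one (op, dst, src) instruction into simple micro-ops."""
    op, dst, src = instr
    if dst.startswith("["):            # memory destination means a
        addr = dst.strip("[]")         # read-modify-write sequence
        return [
            ("LOAD",  "tmp", addr),    # tmp <- mem[addr]
            (op,      "tmp", src),     # tmp <- tmp OP src
            ("STORE", addr,  "tmp"),   # mem[addr] <- tmp
        ]
    return [(op, dst, src)]            # register form: one micro-op

# A memory-destination add becomes three simple micro-ops...
uops = crack(("ADD", "[rbx]", "rax"))
# ...while a register-to-register add passes through as a single one.
single = crack(("ADD", "rcx", "rax"))
```

The point of the sketch is the asymmetry: the "complex" instruction costs three internal operations, but the common register form is already one micro-op, which is why the translated x86 core could be scheduled like a RISC.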
  194. 5
  195. 5
  196. 5
  197. 5
  198. 5
  199. 5
  200. 5
  201. 5
  202. 5
  203. 5
  204. 5
  205. 5
  206. 5
  207. @D R That's a great question. In fact, we could just use the altitude from the GPS system. Right now you dial in the barometric "base pressure", or pressure at sea level. This is used to calibrate the altimeter so that it delivers accurate results; I believe it is within 100 feet of accuracy (other sources say the FAA allows 75 feet of accuracy). It's a big, big deal. A few feet could mean the difference between hitting a building and passing over it. Thus when you fly, you are always getting pressure updates from the controller, because you are going to need updates that are as close as possible to the pressure in your area. So why not use the GPS altitude, which is more accurate? 1. Not everyone has a GPS. 2. Even fewer have built-in GPS (in the panel of the aircraft). 3. A large number of aircraft don't recalibrate their altimeters at all. 4. New vs. old: aircraft have been around for a long time, GPS not so much. If you know a bit about aircraft, you also know that number 3 there lobbed a nuclear bomb into this conversation. Don't worry, we will get there. First, there is GPS and there is GPS, implied by 1 and 2. Most GPS units in use (in light aircraft) are portable. Long ago the FAA mandated a system based on transponders called "Mode C" that couples a barometric altimeter into the transponder. OK, now we are going into the twisty road bits. That altimeter is NOT COMPENSATED FOR BASE PRESSURE. In general the pilot does not read it, the controller does (OK, most modern transponders do read it out, mine does, but an uncompensated altitude is basically useless to the pilot). The controller (generally) knows where you are, and thus knows what the compensating pressure is (no, he/she does not do the math, the system does it for them). Note that GPS had nothing to do with that Mode C discussion. So for the first part of this, for a GPS to be used for altitude, the pilot would have to go back to constantly reporting his/her altitude to the controller. UNLESS! 
You could have a Mode S transponder, or a more modern UAT transceiver. Then your onboard GPS automatically transmits the altitude, and the position, and the speed and direction of the aircraft. Now we are into equipage. Note "onboard GPS". That means built into the aircraft. Most GPS units on light aircraft are handheld, which are a fraction of the cost of built-in avionics. Please let's not get into why that is; it's about approved combinations of equipment in aircraft, calibration, and other issues. The mere mention of it can cause fistfights in certain circles. OK, now let's get into number 3. If you are flying over, say, 14,000 feet, it's safe to say you are not in danger of hitting any mountains, or buildings, or towers. Just other aircraft. So you don't care about pressure compensation. So the rules provide that if you are over 18,000 feet, you reach down and dial in the "standard pressure" of 29.92 inches of mercury that the FAA has decreed (the FAA also has things like standard temperature, standard tree sizes, etc. Fun outfit). So what does that mean? Say you are TWA flight 1, and TWA flight 2 is headed the opposite direction at the same altitude. Both read 18,000 feet. Are they really at 18,000 feet? No, but it doesn't matter. If they are going to collide, they are in the same area, and thus the same pressure, meaning that their errors cancel. It doesn't matter that they are really at 19,123 feet; they both read the same. Thus climbing to 19,000 (by the altimeter) means they will be separated by 1,000 feet. So the short answer is the final one: the barometric system is pretty much woven into the present way aircraft work. It may change, but it is going to take a long time. Like not in my lifetime.
    5
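The "errors cancel" argument above is easy to see in the standard-atmosphere arithmetic. The pressure-altitude formula below is the usual ISA one; the two static-pressure readings (14.94 and 14.33 inHg) are made-up values for two aircraft in the same air mass, not real data.

```python
# Sketch of the altimetry arithmetic. Above 18,000 ft, both pilots dial
# in 29.92 inHg, so both altimeters report plain pressure altitude.
# Whatever the day's real sea-level pressure is, both readings are off
# by the same amount, so indicated separation is real separation.

STD_PRESSURE = 29.92126   # inHg, the decreed ISA "standard pressure"

def pressure_altitude_ft(static_inhg):
    """ISA pressure altitude, in feet, for a measured static pressure."""
    return 145366.45 * (1.0 - (static_inhg / STD_PRESSURE) ** 0.190284)

flight_1 = pressure_altitude_ft(14.94)   # indicates roughly 18,000 ft
flight_2 = pressure_altitude_ft(14.33)   # indicates roughly 19,000 ft
separation = flight_2 - flight_1         # about 1,000 ft of separation
```

Note that neither "altitude" need match the true height above the sea; the scheme only guarantees that two aircraft reading 1,000 feet apart really are about 1,000 feet apart.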
  208. @DumbledoreMcCracken I'd love to make videos, but I have so little free time. Allen was a true character. The company (Seagate) was quite big when I joined, but Allen still found the time to meet with small groups of us. There were a lot of stories circulating... that Allen had a meeting and fired anyone who showed up late because he was tired of nobody taking the meeting times seriously, stuff like that. He is famous for (true story) telling a reporter who asked him "how do you deal with difficult engineers?"... his answer: "I fire them!". My best story about him was our sailing club. I was invited to join the Seagate sailing club. They had something like a 35-foot Catalina sailboat for company use, totally free. We ended up sailing that every Wednesday in the regular race at Santa Cruz Harbor. It was owned by Allen. On one of those trips, after much beer, the story of the Seagate sailboat came out. Allen didn't sail or even like sailboats. He was a power boater and had a large yacht out of Monterey harbor. He rented a slip in Santa Cruz, literally on the day the harbor opened, and rented there since. The harbor was divided in two by a large automobile bridge that was low and didn't raise. The clearance was such that only power boats could get through, not sailboats (unless they had special gear to lower the mast). That divided the harbor into the front harbor and back harbor. As more and more boats wanted space in the harbor, and the waiting list grew to decades, the harbor office came up with a plan to manage the space, which was "all power boats to the back, sailboats to the front", of course with an exception for working (fishing) boats. They called Allen and told him to move. I can well imagine that his answer was unprintable. Time went on, and their attempts to move Allen ended up in court. Allen felt his position as a first renter exempted him. 
The harbor actually got a law passed in the city to require sailboats to move to the back, which (of course) Allen termed the "Allen Shugart rule". Sooooo.... comes the day the law goes into effect. The harbormaster calls Allen: "will you move your boat?" Allen replies: "look outside". Sure enough, Allen had moved his yacht to Monterey and bought a used sailboat which was now in the slip. Since he had no use for it, the "Seagate sailing club" was born. That was not the end of it. The harbor passed a rule that the owners of boats had to show they were using their boats at least once a month. Since Allen could not sail, he got one of us to take him out on the boat; then he would parade past the harbormaster's office and honk a horn and wave. Of course Allen also did fun stuff like run his dog for president. In those days you either loved Allen or hated him, there was no in-between. I was in the former group, in case you could not tell. I was actually one of the lucky ones: I saw the writing on the wall, that Seagate would move most of engineering out of the USA, and I went into networking for Cisco at the time they were still throwing money at engineers. It was a good move. I ran into many an old buddy from Seagate escaping the sinking ship later. Living in the valley is always entertaining.
    5
  209. 5
  210. 5
  211. 5
  212. 5
  213. 5
  214. 5
  215. 5
  216. 5
  217. 5
  218. 5
  219. 5
  220. 5
  221. 5
  222. 5
  223. 4
  224. 4
  225. 4
  226. 4
  227. 4
  228. 4
  229. 4
  230. 4
  231. 4
  232. 4
  233. 4
  234. 4
  235. 4
  236. 4
  237. 4
  238. 4
  239. 4
  240. 4
  241. 4
  242. 4
  243. 4
  244. 4
  245. 4
  246. 4
  247. 4
  248. 4
  249. 4
  250. 4
  251. 4
  252. @tcmtech7515 I do hear that a lot... from people who don't own an EV. The standard refrain is "I'll buy an EV when it gets 400 miles range", even though 200 miles was a common range for gas cars before high-mileage hybrids. 400 miles, or likely a 100 kWh battery, is an inefficient battery to haul around every day when average needs are 50 miles a day or so. It comes down to the story people tell themselves. "30 minutes for a fast charge is too much" means to me that people are trying to fit EVs into what they know about gas cars. I have owned EVs since 2013, and I average 2-3 times per YEAR when I wait at a charger. The rest is home charging, where I don't really track, know about, or CARE much about how long the car took to charge. Everyone I know who actually GOT an EV is pretty much the same. All the concerns they had before getting an EV are gone AFTER they actually get one. I went from leasing a Leaf, to a Bolt that my wife now drives, to my Tesla M3. All were practical, but all differed in terms of range and convenience. A 75-mile-range Leaf was practical as a commuter car, but not for long trips, and it took planning to be able to run significant errands during lunch. I charged it every night, and would have issues if I forgot to plug it in at night, usually requiring a stop off at a supercharger. The Bolt changed that model to only needing to plug in 2-3 times a week, and made long trips possible, if not convenient. I took it from San Francisco to LA once after I got it, which required planning and some fairly major stops to charge along the way. With the Tesla M3 there is no issue with long-distance driving for me at all. I don't drive for 300 miles without stopping, which is 4 hours even at California highway speeds, so the 20-30 minutes it takes to charge can be taken during lunch or dinner breaks, and nowadays I pull into Tesla highway charging centers with 10 to 20 charging spots, most unoccupied.
    4
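A back-of-envelope sketch of that battery math. The efficiency and pack-mass figures here are my own ballpark assumptions, not numbers from the comment:

```python
# Assumed ballpark figures for a current-generation EV:
MI_PER_KWH = 3.5   # miles of range per kWh of battery
KG_PER_KWH = 6.0   # pack mass per kWh

def pack_kwh(range_miles):
    """Battery size needed for a given rated range."""
    return range_miles / MI_PER_KWH

def daily_use_fraction(range_miles, daily_miles=50):
    """Fraction of the pack an average day actually uses."""
    return daily_miles / range_miles

big = pack_kwh(400)  # ~114 kWh under these assumptions
print(f"400 mi pack: {big:.0f} kWh, ~{big * KG_PER_KWH:.0f} kg hauled every day")
print(f"fraction of that pack used on a 50 mi day: {daily_use_fraction(400):.3f}")
```

Under these assumptions a 400-mile car hauls roughly 700 kg of battery while a typical day touches about an eighth of it.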
  253. 4
  254. 4
  255. 4
  256. 4
  257. 4
  258. 4
  259. 4
  260. 4
  261. Good job. There was a bit of conflation there with microcode (it's firmware?). It would have helped to underline that it is entirely internal to the chip and operates the internals of the CPU. In any case, microcode was discarded with the Pentium series, KINDA. It actually lives on today in so-called "slow path" instructions like block moves in the later CPUs, which use microcode because nobody cares if they run super fast or not, since they are generally only used for backwards compatibility and got deprecated in 64-bit mode. I await the second half of this! Things took off again with the AMD64 and the "multicore wars". Despite the mess, the entire outcome probably could have been predicted on sheer economic grounds, that is, the market settling into a #1 and #2 player with small also-rans. Today's desktop market, at least, remains in the hands of the x86 makers except for the odd story of Apple and the M series chips. Many have pronounced the end of the desktop, but it lives on. Many or even most of my colleagues use Apple Macs as their preferred development machines, but, as I write this, I am looking out at a sea of x86 desktop machines. It's rare to see a Mac desktop, and in any case, at this moment even the ubiquitous MacBook Pros the trendsetters love are still x86 based, although I assume that will change soon. Me? Sorry, x86 everything, desktop and laptop(s). At last count I have 5 machines running around my house and office and 4 laptops. I keep buying Mac laptops and desktops, cause, you know, gotta keep up with things, but they grow obsolete faster than a warm banana. Yes, I had PowerPC Macs, and yes, they ended up in the trash. And yes, I will probably buy a Mac M2 at some point.
    4
  262. 4
  263. 4
  264. 4
  265. 4
  266. 4
  267. 4
  268. 4
  269. 4
  270. 4
  271. 4
  272. 4
  273. 4
  274. 4
  275. 4
  276. 4
  277. 4
  278. 4
  279. 4
  280. 4
  281. 4
  282. 4
  283. 4
  284. 4
  285. 3
  286. 3
  287. 3
  288. 3
  289. 3
  290. 3
  291. 3
  292. 3
  293. 3
  294. 3
  295. 3
  296. 3
  297. 3
  298. 3
  299. 3
  300. 3
  301. 3
  302. 3
  303. I think of software design as a parallel to architecture. It has merged with art a bit, and is heavy on engineering. There are objectively "good" buildings and "bad" buildings, but over the centuries we have come to understand that poorly designed buildings fall down and kill people, a lot of them. Software today is divided into life-critical applications and non-life-critical applications. I have worked on both (medical applications). The problem is that there is not enough recognition that software projects fall down. Our complexity is simply out of control, and many projects end when the software has too many bugs and not enough understanding. Programmers move on; the code was not that well understood to begin with. Most software isn't designed to be read. Printed out, it's only useful in the toilet, which dovetails nicely with today's idea that software should not be printed. In the old days (1960s era), it was common to keep programs in printed form, usually annotated by the keeper. If I dare to suggest that a given bit of code is ugly, I am told that nobody is ever going to look at it, and that it is going to be discarded shortly in any case. If we are engineers, we are a funny breed. Electronic engineers don't produce schematics that are messes of spaghetti without much (or any) annotation. Same with mechanical engineers, or (say) architects. I'd like to say that software is a young science and we are going to evolve out of this phase, but I don't think I will see it in my lifetime.
    3
  304. 3
  305. 3
  306. 3
  307. 3
  308. 3
  309. 3
  310. 3
  311. 3
  312. 3
  313. 3
  314. 3
  315. 3
  316. 3
  317. 3
  318. 3
  319. 3
  320. 3
  321. 3
  322. 3
  323. 3
  324. 3
  325. 3
  326. 3
  327. 3
  328. 3
  329. 3
  330. 3
  331. 3
  332. 3
  333. 3
  334. 3
  335. 3
  336. 3
  337. 3
  338. 3
  339. 3
  340. 3
  341. 3
  342. 3
  343. 3
  344. 3
  345. 3
  346. 3
  347. 3
  348. 3
  349. 3
  350. 3
  351. 3
  352. 3
  353. 3
  354. 3
  355. 3
  356. 3
  357. 3
  358. 3
  359. 3
  360. 3
  361. 3
  362. 3
  363. 3
  364. 3
  365. 3
  366. 3
  367. 3
  368. 3
  369. 3
  370. 3
  371. 3
  372. 3
  373. 3
  374. 3
  375. 3
  376. 3
  377. 3
  378. 3
  379. 3
  380. 3
  381. 2
  382. 2
  383. 2
  384. 2
  385. 2
  386. 2
  387. 2
  388. 2
  389.  @lorabex791  1. I work at Google, so no. As in, they may be trying to do it somewhere, but not here. We still use it. Whatever people think of C++, they worked very hard to make sure it was efficient. 2. Drivers are different from kernel code. The majority of drivers are created in industry (I have worked on both disk drivers and network drivers), and a lot of that is C++ nowadays, which is the preference. There are a lot of Windows drivers in C++, and it is customary to borrow code between Windows and Linux drivers. 3. I wasn't talking about replacing any code. This is about new drivers. I don't have a dog in this fight. As a driver/low-level guy I do most of my work in C, but increasingly I have to do C++ for higher-level stuff. Google loves C++ (despite what you heard). Rust is "said" to be gaining traction at Google, and I have done deep dives in Rust, so I'm not (completely) ignorant of it. Any language that claims to be a general purpose language, IMHO, has to have the following attributes: 1. Generate efficient code. 2. Interoperate with C, since that is what virtually all operating systems today are built in. This includes calling into C and callbacks from C (I don't personally like callbacks, but my opinion matters for squat these days). [1] Rust is fairly good at calling C, and gets a D grade for callbacks. Golang is basically a non-starter, because it has such an odd memory model that interactions with C are expensive. Any language can interoperate. Python can call into C, and it's an interpreter. It's about efficiency. Obviously it's a moot point at this time, since "Linus ain't a-gonna do it", but it should be discussed. C++ is too important a language to ignore, especially if Rust gets a pass. Notes: 1. There are some rocks in the stream of C, including zero-terminated strings and fungible pointers (pointer parameters that are NULL for "no parameter" and small integers for "not a pointer"). Most languages outside of C do not support some or all of this. These are bad habits in C and are eschewed these days; see the various arguments over strcmp vs. strncmp.
    2
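The "rocks in the stream" in note 1 are visible from any foreign-function interface. A minimal sketch using Python's ctypes against libc (this assumes a Unix-like system, where `CDLL(None)` resolves libc symbols from the running process):

```python
import ctypes

# Load the C library from the current process (Unix-like systems).
libc = ctypes.CDLL(None)

# Zero-terminated strings: the FFI must marshal the NUL byte itself.
libc.strncmp.restype = ctypes.c_int
libc.strncmp.argtypes = [ctypes.c_char_p, ctypes.c_char_p, ctypes.c_size_t]
assert libc.strncmp(b"strcmp", b"strncmp", 3) == 0   # first 3 bytes match
assert libc.strncmp(b"strcmp", b"strncmp", 4) != 0   # 'c' vs 'n' differ

# "Fungible pointers": C APIs routinely return NULL for "nothing here",
# so the convention leaks into every caller. ctypes maps NULL to None.
libc.strchr.restype = ctypes.c_void_p
libc.strchr.argtypes = [ctypes.c_char_p, ctypes.c_int]
assert libc.strchr(b"kernel", ord("z")) is None      # NULL comes back as None
```

Every binding author has to re-encode these C conventions by hand, which is exactly why they count as rocks in the stream.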
  390. 2
  391. 2
  392. 2
  393. 2
  394. 2
  395. 2
  396. 2
  397. 2
  398. 2
  399. 2
  400. 2
  401. 2
  402. 2
  403. 2
  404. 2
  405. 2
  406. 2
  407. 2
  408. 2
  409. 2
  410. 2
  411.  @ttb1513  The last company I worked at doing silicon design, Seagate, was emblematic of the basic problems with ASICs. It was 1995 and they were new at ASIC design in house. And they weren't very good at it. It was random logic design, before Verilog. An example of this was our timing verification. We had timing chains that were too long for the cycle time, and thus they simply allowed for the fact that the signal would be sampled in the next clock cycle. Now, if you were an ASIC designer back then, what I just said would have made you reach for the Tums, if not a 911 call for cardiac arrest. It's an open invitation to metastability. And indeed, our AT&T fab guys were screaming at us to stop that. I got put in charge of hardware simulation for the design, and I have detailed this fiasco in these threads before, so I won't go over it again. The bottom line was that ASIC process vendors were losing trust in their customers to perform verification. The answer was that they included test chains in the designs that would automatically verify the designs at the silicon level. It meant that the manufactured silicon would be verified, that is, defects on the chip would be detected regardless of what the design did. My boss, who with the perspective of time I can now certify was an idiot, was ecstatic over this new service. It was a gonna fixa all o' de problems, don't ya know? I pointed out to him, pointlessly I might add, that our design could be total cow shit and still pass these tests with flying colors. It was like talking to a wall. In any case, the entire industry went that way. Designs are easier to verify now that the vast majority of designs are in Verilog. I moved on to software only, but I can happily say that there are some stunning software verification suites out there, and I am currently working on one, so here we are.
    2
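The "total cow shit and still pass with flying colors" point can be shown with a toy model: a structural (scan-style) test only checks that the silicon matches the netlist, so a defect-free chip of a wrong design sails through. This is an invented miniature, not real DFT tooling:

```python
# The netlist as designed (wrong: OR) vs. the intended spec (XOR).
NETLIST = {"g1": lambda a, b: a | b}
SPEC = lambda a, b: a ^ b

def structural_test(fabricated):
    """Passes if the silicon matches the netlist -- catches defects only."""
    return all(fabricated[g](a, b) == NETLIST[g](a, b)
               for g in NETLIST for a in (0, 1) for b in (0, 1))

def functional_test(fabricated):
    """Passes only if the design actually implements the spec."""
    return all(fabricated["g1"](a, b) == SPEC(a, b)
               for a in (0, 1) for b in (0, 1))

chip = dict(NETLIST)              # defect-free silicon of a wrong design
assert structural_test(chip)      # scan-style test: flying colors
assert not functional_test(chip)  # the design is still wrong (1|1 != 1^1)
```

The structural test verifies the fab, not the designer, which is exactly the gap the boss missed.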
  412. 2
  413. 2
  414. 2
  415. 2
  416. 2
  417. 2
  418. 2
  419. 2
  420. 2
  421. 2
  422. 2
  423. 2
  424. 2
  425. 2
  426. 2
  427. 2
  428. 2
  429. 2
  430. 2
  431. 2
  432. 2
  433. 2
  434. 2
  435. 2
  436. 2
  437. 2
  438. 2
  439. 2
  440. 2
  441. 2
  442. 2
  443. 2
  444. 2
  445. 2
  446. 2
  447. 2
  448. 2
  449. 2
  450. 2
  451. 2
  452. 2
  453. 2
  454. 2
  455. 2
  456. 2
  457. 2
  458. 2
  459. 2
  460. 2
  461. 2
  462. 2
  463. 2
  464. 2
  465. 2
  466. 2
  467. 2
  468. 2
  469. 2
  470. 2
  471. 2
  472. 2
  473. 2
  474. 2
  475. 2
  476. 2
  477. 2
  478. 2
  479. 2
  480. 2
  481. 2
  482. 2
  483. 2
  484. 2
  485. 2
  486. 2
  487. 2
  488. 2
  489. 2
  490. 2
  491. 2
  492. 2
  493. 2
  494. 2
  495. 2
  496. 2
  497. 2
  498. 2
  499. 2
  500. 2
  501. 2
  502. 2
  503. 2
  504. 2
  505. 2
  506. 2
  507. 2
  508. 2
  509. 2
  510. 2
  511. 2
  512. 2
  513. 2
  514. And so the conversation degenerates. What a surprise. Let me tell you a story. I grew up in LA. Once I was walking down the sidewalk, and three black kids were walking the other direction. I moved to the right side of the sidewalk, and they formed a line to walk abreast, blocking the sidewalk, and then some. I stepped into the street and waited for them to pass, and they glared at me as they did so. This was the 1960s, a bad time in LA and elsewhere. I was 10 years old, so you will forgive me if I didn't understand the meaning of it all. But then, the kids who wanted to teach me a lesson about black rights were, in LA, highly unlikely to have experienced the need to cross the street to the other side to avoid a white person, which happened in the deep south. So you will forgive me for pointing out that this was an example of people protesting a problem they had never experienced, and directing that protest against a person, me, who had never done anything like that, and in fact didn't even know what was going on. The reason I recount the story is that it is very emblematic of what is going on now. What we see on TV now is the rioters (and they are rioters) burning down businesses. And if your channel feels the need, you will also see the aftermath: a sad man or woman walking through a burned-out building and talking about how they will (or will not) rebuild their family business. But they are capitalists, right? They have it coming. Scrape the politics off that argument and you will see small business owners who lost their livelihood. Some of them black (as if that matters). So what's wrong with a tooth for a tooth? After such an unfortunate event as we witnessed, is it fair to shoot down white people at random? Or, less extreme, burn their houses? Their cars? No, it's businesses. That store. The car dealership. Burning and looting is reparations. And on and on. So why, then? Because they are easy targets. They are downtown, and few people are going to have sympathy for them, because they are rich capitalists, right? Actually, you would not be wrong for thinking that in general. Smart business owners have insurance. But at the end of the day, they are going to move out. Use the insurance to start again, elsewhere. They have been punished for things they had no part in (unless capitalism itself is the guilty party). So they are going to take it as an act of God and get out of town. Now ask yourself who is helped here? Do business owners have a responsibility to rebuild and take it? Should they just accept higher insurance premiums? Is a large company, a grocery store, legally obligated to do business in the public interest?
    2
  515. 2
  516. 2
  517. 2
  518. 2
  519. 2
  520. 2
  521. 2
  522. 2
  523. 2
  524. 2
  525. 2
  526. 2
  527. 2
  528. 2
  529. 2
  530. 2
  531. 2
  532. 2
  533. 2
  534. 2
  535. 2
  536. 2
  537. 2
  538. 2
  539. 2
  540. 2
  541. 2
  542. 2
  543. 2
  544. 2
  545. 2
  546. 2
  547. 2
  548. 2
  549. 2
  550. 2
  551. 2
  552. 2
  553. 2
  554. 2
  555. 2
  556. 2
  557. 2
  558. 2
  559. 2
  560.  @AlexanderSylchuk  Oh, you are begging for my favorite story. I interviewed with a company that made precision flow valves. These were mechanical nightmares of high precision that accurately measured things like gas flow in chemical processes. That is like half the chemical industry (did you know a lot of chemical processes use natural gas as their feedstock?). Anyways, what has that got to do with this poor programmer? Well, like most industries, they were computerizing. They had a new product that used a "bang bang" valve run by a microprocessor. A bang bang valve is a short piston driven by a solenoid that, when not energized, is retracted by a spring and opens an intake port to let a small amount of gas into a chamber. Then the solenoid energizes, pushes the piston up and the gas out another port. Each time the solenoid activates, a small amount of gas is moved along. Hence the "bang bang" part. If you want to find one in your house, look at your refrigerator. It's how the Freon compressor in it works. OK, well, that amount of gas is not very accurately measured no matter how carefully you machine the mechanism. But it turns out to be "self accurate", that is, whatever the amount of gas IS that is moved, it is always the same. The company, which had gotten quite rich selling their precision valves, figured they could produce a much cheaper unit that used the bang bang valve. So they ginned it up, put a compensation table in it so the microprocessor could convert gas flows to bang bang counts, and voila! Voici le produit! It worked. Time to present it to the CEO! The CEO asks the engineers "just how accurate is it?" Engineer says: well... actually it is more accurate than our precision valves. And for far cheaper. The story as told to me didn't include just how many drinks the CEO needed that night. So the CEO, realizing that he had seen the future, immediately set into motion a plan to obsolete their old, expensive units and make the newer, more accurate and cheaper computerized gas flow valves. Ha ha, just kidding. He told the engineers to program the damn thing to be less accurate so that it wouldn't touch their existing business. Now, they didn't hire me. Actually, long story: they gave me a personality test that started with something like "did you love your mother"; I told them exactly where, in what direction, and with how much force they could put their test, and walked out. I didn't follow up on what happened, mainly because I find gas flow mechanics to be slightly less interesting than processing tax returns. But I think if I went back there, I would find a smoking hole where the company used to be. And that is the (very much overly long) answer to your well-meaning response.
    2
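A compensation table like the one described is typically just a calibration curve with interpolation between measured points. A minimal sketch, with all calibration numbers invented for illustration:

```python
import bisect

# Hypothetical calibration points: (requested flow, bang-bang pulses/sec).
# The units and values are invented; a real table comes from bench tests.
TABLE = [(0.0, 0.0), (10.0, 42.0), (50.0, 198.0), (100.0, 380.0)]
FLOWS = [f for f, _ in TABLE]

def flow_to_counts(flow):
    """Linearly interpolate the compensation table, clamping at the ends."""
    if flow <= FLOWS[0]:
        return TABLE[0][1]
    if flow >= FLOWS[-1]:
        return TABLE[-1][1]
    i = bisect.bisect_right(FLOWS, flow)
    (f0, c0), (f1, c1) = TABLE[i - 1], TABLE[i]
    return c0 + (c1 - c0) * (flow - f0) / (f1 - f0)

assert flow_to_counts(10.0) == 42.0
assert flow_to_counts(30.0) == 120.0  # halfway between the 10 and 50 points
```

Because the valve is "self accurate", the same table corrects every unit; detuning it, as the CEO demanded, is just a matter of perturbing these numbers.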
  561. 2
  562. 2
  563. 2
  564. 2
  565. 2
  566. 2
  567. 2
  568. 2
  569. 2
  570. 2
  571. 2
  572. 2
  573. 2
  574. 2
  575. 2
  576. 2
  577. 2
  578. 2
  579. 2
  580. The technology is tied to the state of battery technology. The reason EVs are impractical for aircraft use is the weight of the batteries. Not only do they weigh much more than the fuel they replace, but, unlike liquid fuel, they don't get lighter in flight as the fuel is consumed. If/when batteries improve significantly, that calculus will change. The other factors are: 1. Noise. Most of the noise in an aircraft is blade noise. This is why (for example) electric yard leaf blowers aren't really much quieter than gas powered ones. 2. Reliability. Sorry, the comments here are just wrong. The reliability of electric motors is far greater than piston engines, and compares more directly to turbine engines. 3. Power. The power-to-weight ratio of an electric motor is significantly greater than a piston engine's, and exceeds even a turbine engine's. This allows an electric aircraft to use multiple motors and thus gain even better reliability. Again, it mainly depends on battery technology advances. Most of the stock bets on eVTOLs have been based on the idea that batteries will significantly improve in a short time. They actually have. They have about doubled in the last two decades, but then the battery chemistry changed radically, from lead-acid to lithium. It is not certain we will see that much improvement in the next two decades. Also, the majority of research being done now is to reduce the cost of batteries, not so much the weight, which is not as important for ground vehicle use.
    2
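The battery-weight point can be made with rough numbers. The specific-energy and efficiency figures below are order-of-magnitude assumptions, not measurements:

```python
# Rough specific energies (assumed, order-of-magnitude only):
JET_FUEL_WH_PER_KG = 12000  # ~43 MJ/kg
LIION_WH_PER_KG = 250       # a good current-generation pack level

def battery_mass_for(fuel_kg, engine_eff=0.35, motor_eff=0.90):
    """Battery mass carrying the same *usable* energy as fuel_kg of jet
    fuel, crediting the electric drivetrain's higher efficiency."""
    usable_wh = fuel_kg * JET_FUEL_WH_PER_KG * engine_eff
    return usable_wh / (LIION_WH_PER_KG * motor_eff)

# ~18.7x the fuel mass under these assumptions -- and unlike fuel,
# none of it burns off in flight.
print(f"{battery_mass_for(1000):.0f} kg of battery to replace 1000 kg of fuel")
```

Even with generous efficiency credits the pack is more than an order of magnitude heavier, which is why the whole question hinges on battery improvement.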
  581. 2
  582. 2
  583. 2
  584. 2
  585. 2
  586. 2
  587. 2
  588. 2
  589. 2
  590. 2
  591. 2
  592. 2
  593. 2
  594. 2
  595. 2
  596. 2
  597. The problem in Unix/Linux is that, for a host of reasons, users are encouraged to take root permissions. Sudo is used way too often. This breaks down into two basic issues. First, users are encouraged to install programs with privileged files, or files in privileged areas. Second, in order to fix problems, it is often necessary to modify system-privileged files. The first issue is made far worse by effectively giving programs that are installing themselves global privilege to access system files and areas. It's worse because the user often does not even know what areas or files the program is installing itself in. The first issue is simple to solve: install everything a user installs local to that user, i.e., in their home directory or a branch thereof. The common excuses for not doing this are that "it costs money to store that, and users can share", or "all users can use that configuration". First, the vast majority of Unix/Linux installations these days are single user. Second, even a high-end 1 TB M.2 SSD costs 4 cents per gigabyte, so it's safe to say that most apps won't break the bank. This also goes to design: a file system can easily be designed to detect and keep track of duplicated sectors on storage. The second issue is solved by making config files or script files that affect users local, or having an option to be local, to that particular user. For example, themes on GTK don't need to be system wide. They can be global to start but overridden locally, etc. A user only views one desktop at a time. The configuration of that desktop does not need to be system wide. My ultimate idea for this, sorta like containers, is to give each user a "virtual file system", that is, go ahead and give each user a full standard file tree, from root down, for Unix/Linux, BUT MAKE IT A VIRTUAL COPY FOR THAT USER. I.e., let the user scribble on it, delete files, etc., generally modify it, but only their local copy of it. The kernel can keep track of which files are locally modified by that user account, akin to copy-on-write paging. You can even simulate sudo privileging so that the system behaves just like straight Unix/Linux, but only modifies local copies, etc.
    2
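The per-user virtual file tree described above is essentially a copy-on-write overlay, the same idea behind overlayfs and containers. A minimal in-memory sketch:

```python
# Copy-on-write overlay sketch: reads fall through to the shared system
# tree until the user writes, after which the user sees only their copy.

class UserView:
    def __init__(self, system_tree):
        self.base = system_tree  # shared tree, never mutated here
        self.local = {}          # this user's modified or new files
        self.deleted = set()     # files the user "deleted" in their view

    def read(self, path):
        if path in self.deleted:
            raise FileNotFoundError(path)
        return self.local.get(path, self.base.get(path))

    def write(self, path, data):  # copy-on-write: only the view changes
        self.deleted.discard(path)
        self.local[path] = data

    def delete(self, path):
        self.local.pop(path, None)
        self.deleted.add(path)

system = {"/etc/motd": "welcome"}
alice = UserView(system)
alice.write("/etc/motd", "hi alice")
assert alice.read("/etc/motd") == "hi alice"  # user sees their own copy
assert system["/etc/motd"] == "welcome"       # the system tree is untouched
```

The kernel analogue would track the `local`/`deleted` sets per account, exactly like copy-on-write paging tracks dirtied pages.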
  598. 2
  599. 2
  600. 2
  601. 2
  602. 2
  603. 2
  604. 2
  605. 2
  606. 2
  607. 2
  608. 2
  609. 2
  610. 2
  611. 2
  612. 2
  613. 2
  614. 1
  615. 1
  616. This is the liberal view of history. The "solid south" existed up until the 1960s, and Republicans were active in civil rights legislation throughout that time. The party that changed was the Democrats, which came to a head with George Wallace (yes, a Democrat). LBJ went against much of his own party and joined with the Republicans to pass the Civil Rights Act, and what was effectively an apartheid state in the south collapsed. Democrats did an about-face and changed from being the party that suppressed blacks to being the party of socialism, with a laundry list of social engineering acts modeled after what FDR attempted during the Great Depression. This is the important point. The Republicans believed, and believe, in basic freedoms and limited government throughout the time from Lincoln to the present (Lincoln believing that the basic freedoms of man extended to blacks as well). The Democrats believe virtually all problems have a government answer, as expressed through FDR and then the rise of socialism with LBJ. This brought a backlash with Reagan, a huge rollback of the regulatory state, and a huge boost in the economy. With the war and Obama came a huge snap back to socialism and a fall in the economy, and once again the forces on the right are feeling revolutionary. However, unlike the last time, when there were clear leaders, we have Trump. Trump is a symptom of the deep divisions in this country. The Democrats have done the USA a great favor by all but declaring intent to carry the flag of the socialist party (when Obama was first running for office, the head of the French socialist party declared that "the way you run as a socialist in America is by proclaiming you are not a socialist"). This fight, now so clearly defined between socialism and capitalism, big government and limited government, will come to a head, but I suspect not in this election, which is more like Huey Long vs. Teddy Roosevelt than Reagan vs. Carter. Perhaps the next election.
    1
  617. 1
  618. 1
  619. 1
  620. 1
  621. 1
  622. 1
  623. 1
  624. 1
  625. 1
  626. 1
  627. 1
  628. 1
  629. 1
  630. 1
  631. 1
  632. 1
  633. 1
  634. 1
  635. 1
  636. 1
  637. 1
  638. 1
  639. 1
  640. 1
  641. 1
  642. 1
  643. 1
  644. 1
  645. 1
  646. 1
  647. 1
  648. 1
  649. I resonate with your comments about being given low-value work. I left a job at Intel when they tried to downgrade my position from developer to tester and I refused. It was during Covid, and I got two contracts back to back with Apple and Meta. High paying, and they seemed good at the time, but in both cases it became obvious after starting the work that they had put me on low-level work, debugging and testing, apparently with the idea that someone with what was clearly a high-level developer background should be able to handle it easily. This was of course true, and I wasn't in a position to drop the contract(s), even though both companies had clearly misrepresented the position. Long story short, both contracts ended with the complaint that I wasn't working fast enough for their tastes. More interesting, better developer-level jobs in both companies were interested in interviewing me during my time there, and I suspect those openings rapidly disappeared when they found out that I was currently in a below-developer position in the company despite my resume. After the end of both contracts, I got a job last year with a startup that gave me valuable developer work, albeit with a pay cut. I'm already drawing SSI and have an overpriced Silicon Valley house I can sell to get out of here, so the next move for me is to just quit working and move if the bottom falls out again. You kids have my every sympathy for what is going on now, but I will say that I have been a Silicon Valley engineer for 40+ years, and times when the employers have the upper hand have come and gone before. They will go again, and the companies will realize that they got sold a turkey with the "AI revolution". Just wait it out.
    1
  650. 1
  651. 1
  652. 1
  653. 1
  654. 1
  655. 1
  656. 1
  657. 1
  658. 1
  659. 1
  660. 1
  661. 1
  662. 1
  663. 1
  664. 1
  665. 1
  666. 1
  667. 1
  668. 1
  669. 1
  670. 1
  671. 1
  672. 1
  673. 1
  674. 1
  675. 1
  676. 1
  677. 1
  678. 1
  679. 1
  680. 1
  681. 1
  682. 1
  683. 1
  684. 1
  685. 1
  686. 1
  687. 1
  688. 1
  689. 1
  690. 1
  691. 1
  692. 1
  693. 1
  694. 1
  695. 1
  696. 1
  697. 1
  698. 1
  699. 1
  700. 1
  701. 1
  702. 1
  703. 1
  704. 1
  705. 1
  706. 1
  707. 1
  708. 1
  709. 1
  710. 1
  711. 1
  712. 1
  713. 1
  714. 1
  715. 1
  716. 1
  717. 1
  718. 1
  719. 1
  720. 1
  721. 1
  722. 1
  723. 1
  724. 1
  725. 1
  726. 1
  727. 1
  728. 1
  729. 1
  730. 1
  731. 1
  732. 1
  733. 1
  734. 1
  735. 1
  736. 1
  737. 1
  738. 1
  739. 1
  740. 1
  741. 1
  742. 1
  743. 1
  744. 1
  745. 1
  746. 1
  747. 1
  748. 1
  749. 1
  750. 1
  751. 1
  752. 1
  753. 1
  754. 1
  755. 1
  756. 1
  757. 1
  758. 1
  759. 1
  760. 1
  761. 1
  762. 1
  763. 1
  764. 1
  765. 1
  766. 1
  767. 1
  768. 1
  769. 1
  770. 1
  771. 1
  772. 1
  773. 1
  774. 1
  775. 1
  776. 1
  777. 1
  778. 1
  779. 1
  780. 1
  781. 1
  782. 1
  783. So Ford loses money on each car. And Tesla should as well, right? But Elon Musk has made billions from Tesla. It's almost like Tesla knows what they are doing and Ford does not, right? As an EV owner since 2013, I don't want the government to subsidize EVs, but then I never did. But then it also makes sense for the government to STOP keeping BYD OUT of the American market. The Ford and GM subsidies are PROTECTIONISM, and they are what is allowing Ford and GM to SUCK at making EVs while still staying in business. BYD is making cheap EVs for the world, the world outside of the USA. And keeping BYD out is part of the "we are stopping the bad Chinese" meme Biden (and before him Trump) are selling you. That bad Chinese company is selling you cheap cars simply because they are evil. Right. Let the f***ing market work and things will get better. Ford and GM may not survive, but they should be allowed to stand or fall of their own accord. Your government has shoved a truly impressive amount of money into Ford and GM to "keep American jobs", which is people making like $50 an hour to put a screw in a car. THAT's why Ford is losing money. Tesla is standing on their own and competing head to head with BYD in their own market. And they could do it without subsidies. They do it by offering a better product. What a concept. I have had both a GM Bolt and a Tesla. The Bolt is a nice car, but falls far short of a Tesla for the equivalent price. GM sucks at software, and I know why personally; I have seen their operations.
    1
  784. 1
  785. 1
  786. 1
  787. 1
  788. 1
  789. 1
  790. 1
  791. 1
  792. 1
  793. 1
  794. 1
  795. 1
  796. 1
  797. 1
  798. 1
  799. 1
  800. 1
  801. 1
  802. 1
  803. 1
  804. 1
  805. 1
  806. 1
  807. 1
  808. 1
  809. 1
  810. 1
  811. 1
  812. 1
  813. 1
  814. 1
  815. 1
  816. 1
  817. 1
  818. 1
  819. 1
  820. 1
  821. 1
  822. 1
  823. 1
  824. 1
  825. 1
  826. 1
  827. 1
  828. The 8086 series was a loser from the get-go. Software professionals preferred the cleaner architecture of the 68000 series, which did away with segmentation (a disease more than a technology). Indeed, the 8086 was in fact failing in the marketplace until IBM "rescued" it by using it in the IBM PC. That was the first crisis for the x86 family. The thing about segmentation is that it is impossible to hide from higher-level software. Even the language has to adapt to that hack, and indeed, the C of the day had special features just to adapt to it. The result was that software was divided into x86-specific software and non-x86 software. Intel was happy about this because their users were non-portable. They doubled down on segmentation long after the size of chips enabled better architectures with the 80286, which could be explained as "you will like eating sh*t if we make it a standard". IBM again propped up the x86 family by releasing the IBM PC/AT, which, even though it used the 80286, never saw wide use of an 80286-enabled operating system (cough, OS/2). This carried x86 into the age of RISC. The x86 family entered its second crisis as improved 68000 processors, SPARC, and other RISC processors nipped at its heels. The introduction of the 80386 saved Intel, moved past segmented mode, and allowed a Unix implementation on x86 for the first time. The next crisis for the x86 was when the series fell behind RISC processors in performance. Intel pulled off a genuine coup by rearchitecting the x86 as an internal RISC that translated the crappy x86 instruction set to "ROPs", or internal RISC operations, and made the CPU superscalar with the Pentium. The final crisis for the x86 was when Intel tried to dump the dog of a CPU with Itanium, only to 180 again and join AMD with the "Hammer" AMD64 arch. Now we are in the age of RISC-V. The x86 has become like a case of herpes: bad, but livable, and seemingly never to go away. We'll see.
    1
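For readers who never suffered it: 8086 real-mode segmentation formed a 20-bit physical address as segment * 16 + offset, so many different segment:offset pairs alias the same byte, and compilers had to expose "near" vs. "far" pointers to cope. A quick sketch of the address math:

```python
# 8086 real-mode address formation: physical = segment * 16 + offset,
# wrapped to the 20-bit address bus.

def physical(segment, offset):
    return (segment * 16 + offset) & 0xFFFFF  # 20-bit wrap-around

assert physical(0x1234, 0x0005) == 0x12345
# Aliasing: two different far pointers name the same physical byte,
# which is why pointer comparison and arithmetic leaked into the language.
assert physical(0x1000, 0x0010) == physical(0x1001, 0x0000)
```

The wrap-around at the top of the 20-bit space is the same quirk the infamous A20 gate existed to control.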
  829. 1
  830. 1
  831. 1
  832. 1
  833. 1
  834. 1
  835. 1
  836. 1
  837. 1
  838. 1
  839. 1
  840. 1
  841. 1
  842. 1
  843. 1
  844. 1
  845. 1
  846. 1
  847. 1
  848. 1
  849. 1
  850. 1
  851. 1
  852. 1
  853. 1
  854. 1
  855. 1
  856. 1
  857. 1
  858. 1
  859. 1
  860. 1
  861. 1
  862. 1
  863. 1
  864. 1
  865. 1
  866. 1
  867. 1
  868. 1
  869. 1
  870. 1
  871. 1
  872. 1
  873. 1
  874. 1
  875. 1
  876. 1
  877. 1
  878. This is a fairly vacuous comparison. None of the BRICS nations are going to unite to form a single force, and they are in fact more likely to start wars with each other. Further, the real comparison, GDP per capita, is unmentioned here for the obvious reason: it is stacked in favor of the USA, at $57k, vs. Brazil at $15k, Russia at $26k, China at $15k and South Africa at $13k. The USA spends more money than other countries on defense because we have better technology and are more effective than other forces, again per capita, and that relationship to per capita GDP is not an accident. Finally, you have to consider the force equations of each of the respective countries. Russia and India devote a lot of their forces to countering China, but not each other, since they don't share a land border. China counters the USA in the Pacific, but probably still reserves the majority of its forces for Russia and India, its bordering states. Brazil could effectively do without any military, since the only credible threat is Argentina, and nobody would take Brazil's territory even if it were offered free. South Africa has a large military but no credible enemies, which leads you to understand the real reason their large military exists: to control their own people. India does not consider the USA a threat; quite the contrary, they consider the USA an ally. Thus the net force facing the USA is China and Russia, and neither state seriously considers the USA an invasion threat. Thus I sleep well at night.
    1
  879. 1
  880. 1
  881. 1
  882. 1
  883. 1
  884. 1
  885. 1
  886. 1
  887. 1
  888. 1
  889. 1
  890. 1
  891. 1
  892. 1
  893. 1
  894. 1
  895. 1
  896. 1
  897. 1
  898. 1
  899. 1
  900. 1
  901. 1
  902. 1
  903. 1
  904. 1
  905. 1
  906. 1
  907. 1
  908. 1
  909. 1
  910. 1
  911. 1
  912. 1
  913. 1
  914. 1
  915. 1
  916. 1
  917. 1
  918. 1
  919. 1
  920. 1
  921. 1
  922. 1
  923. 1
  924. 1
  925. 1
  926. 1
  927. 1
  928. 1
  929. 1
  930. 1
  931. 1
  932. 1
  933. 1
  934. 1
  935. 1
  936. Hydrogen, hydrogen. First of all, hydrogen is not a fuel. It is simply a way to transfer energy. Second, by the time you solve all the issues with practical use of it in aircraft (pressurized vessels, the resulting weight of such vessels, insulating them, not being able to put them in wings, etc., etc.), it does not look so attractive anymore. By the time the technology advances, electric batteries for aircraft use are going to be just as far along, if not farther. I suspect decarbonization of aircraft is going to proceed by dividing short-haul from long-haul aircraft, electrifying the former, then using a solution like plant-derived fuel for the latter. The airline industry has been trained to think only in terms of long haul. If I want to go from my house in San Jose, CA, to my kid's house in Eugene, Oregon, a search on the airline sites gives a path through Denver. So I would need to travel half the length of the USA, then back again, just to reach another destination on the same coast. If you take out the hub-and-spoke model, stop treating people like cattle being shipped to market, and get people from their ACTUAL hometowns to their ACTUAL destinations (not to and from a HUB AIRPORT), you can do it with less travel time, less energy, slower aircraft, and make customers happy. There WAS an airline providing service from here to Eugene. Directly. From Alaska Airlines. It used turboprop aircraft, and perhaps took twice as long as a jet would, but certainly less time than a tour of Denver (and waiting for a connecting flight there). And that turboprop aircraft could be replaced with a pure electric aircraft, including batteries in the wings. The airline industry is like a shoe company that only sells size 6 shoes and works by hammering the shoes onto your feet until they fit (and then you limp out of there).
    1
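The tank-weight argument above can be put in rough numbers. The figures below are approximate values I am supplying for illustration (standard heating values, a typical 700-bar tank mass fraction), not data from the comment:

```python
# Rough system-level energy densities (approximate, assumed for illustration):
# hydrogen itself is energy-dense, but once the pressure vessel is counted,
# the usable figure drops sharply.
H2_KWH_PER_KG = 33.3          # lower heating value of hydrogen, ~33.3 kWh/kg
TANK_H2_MASS_FRACTION = 0.06  # ~6% of a 700-bar tank system's mass is hydrogen
JET_FUEL_KWH_PER_KG = 11.9    # ~43 MJ/kg
BATTERY_KWH_PER_KG = 0.25     # typical current Li-ion pack

h2_system = H2_KWH_PER_KG * TANK_H2_MASS_FRACTION
print(f"H2 including tank: {h2_system:.1f} kWh/kg")   # ~2.0 kWh/kg
print(f"Jet fuel:          {JET_FUEL_KWH_PER_KG} kWh/kg")
print(f"Battery pack:      {BATTERY_KWH_PER_KG} kWh/kg")
```

On these assumptions the tank erases most of hydrogen's headline advantage, which is the comment's point about vessel weight.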
  937. 1
  938. 1
  939. 1
  940. 1
  941. 1
  942. 1
  943. 1
  944. 1
  945. 1
  946. 1
  947. 1
  948. 1
  949. 1
  950. 1
  951. 1
  952. 1
  953. 1
  954. 1
  955. 1
  956. California is an island, geographically and politically. I'm currently sitting in Kauai, which does not appear to care about waste of either water or power. The water surplus is obvious on Kauai if you are trying to stay dry here. The power surplus is largely due to solar power, which the power company made obvious by rejecting new requests from homeowners to connect their solar panels to the grid, due to overloading from the existing surplus. So why isn't California the beneficiary of the same principles? For water, it has turned down all efforts to modernize the water system and collect the water falling on the state, even though that water is far in excess of California's needs, even with the incredible waste of it by well-connected agricultural interests. For power, California leads the nation in both utility solar sites and individual houses with solar, but manages to be on an upward spiral of prices even though solar is now cheaper than most power sources. The issue is our socialist government, which generated a huge surplus under Governor Brown by dramatically increasing taxes and fees for everything under the sun. You pay fees here for sitting down at a restaurant vs. take out; you pay fees to buy computer monitors, a fee whose purpose was to dispose of the lead in computer monitors. What lead, you say? Yes, Virginia, computer monitors used to have lead in them. The commuter lanes put in to ease traffic are now being sold to the highest bidder. The list goes on and on. So California had a surplus from all of those fees and taxes. Where is it? It's in bureaucratic waste, government workers' retirement and salaries, etc. It doesn't matter how much money the government takes in, it will spend that and more.
CA has already instituted universal health care via "Covered California" and will institute universal basic income when it thinks it can get away with it, despite the fact that any reasonable payment of UBI will still leave people homeless.
    1
  957. 1
  958. 1
  959. 1
  960. 1
  961. 1
  962. 1
  963. 1
  964. 1
  965. 1
  966. 1
  967. 1
  968. 1
  969. 1
  970. 1
  971. 1
  972. 1
  973. 1
  974. 1
  975. 1
  976. 1
  977. 1
  978. 1
  979. 1
  980. 1
  981. 1
  982. 1
  983. 1
  984. 1
  985. 1
  986. 1
  987. 1
  988. 1
  989. 1
  990. 1
  991. 1
  992. 1
  993. 1
  994. 1
  995. 1
  996. 1
  997. 1
  998. 1
  999. 1
  1000. 1
  1001. 1
  1002. 1
  1003. 1
  1004. 1
  1005. 1
  1006. 1
  1007. 1
  1008. 1
  1009. 1
  1010. 1
  1011. 1
  1012. 1
  1013. 1
  1014. 1
  1015. 1
  1016. 1
  1017. 1
  1018. 1
  1019. 1
  1020. 1
  1021. 1
  1022. 1
  1023. 1
  1024. 1
  1025. 1
  1026. 1
  1027. 1
  1028. 1
  1029. 1
  1030. 1
  1031. 1
  1032. 1
  1033. 1
  1034. 1
  1035. 1
  1036. 1
  1037. 1
  1038. 1
  1039. 1
  1040. 1
  1041. 1
  1042. Let's see: 1. I don't like auto formatters. I have actually pulled down the source for GNU indent, modified it for site requirements, and then automated it for commits. However, that was a company that had lost control of their source, with many programmers with odd styles who had left the company and left the source in a very bad state. I personally am very picky about my code formatting, and the last thing I want to see is an auto formatter redo everything. 2. I love your point about finishing work. I should add that most of the time this is not a programmer issue but a management issue. You want to get something working to show management/clients, and despite many warnings about the code needing improvement, management wants you to move on to the next project. 3. Building time into a project for documentation, debug, and test: I think that needs to be accounted for. Nobody seems to build time in for debugging, even though that is 50% or more of the work. Scheduling for management with hand waving about needing extra time is a recipe for managers cutting down your schedule. One story that touches on what you said: I had a fairly intensive parser to process design files. I have done compilers before, and have a very well defined set of procedures I use to lex the file before processing it. I handed it off to another programmer to add a feature, and he was taking a serious amount of time to do it. I didn't want to micromanage, and he was not my employee in any case, so I just made the standard inquiries about how much time it was taking. When he was done and handed over the work, he had rewritten all of the parsing front end using scanf() statements, an effort far in excess of the actual feature work required. Asked why, he said "I didn't understand what you did".
    1
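The "lex the file before processing it" approach mentioned in the story can be sketched in a few lines. This is an illustrative minimal tokenizer, not the author's actual procedures: the input is reduced to a token stream up front, so the parser never touches raw text the way scanf()-based code does.

```python
# Minimal hand-rolled lexer (illustrative): reduce input text to a list of
# (kind, text) tokens before any parsing happens.
import re

TOKEN_RE = re.compile(r"\s*(?:(?P<num>\d+)|(?P<ident>[A-Za-z_]\w*)|(?P<op>[+\-*/=()]))")

def lex(text):
    tokens, pos = [], 0
    while pos < len(text):
        m = TOKEN_RE.match(text, pos)
        if not m:
            raise SyntaxError(f"bad character at position {pos}")
        pos = m.end()
        # lastgroup names whichever alternative matched: num, ident, or op
        tokens.append((m.lastgroup, m.group(m.lastgroup)))
    return tokens

print(lex("x = 40 + 2"))
```

A parser built on top of this just peeks at and consumes tokens, which keeps the two concerns cleanly separated.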
  1043. 1
  1044. 1
  1045. 1
  1046. 1
  1047. 1
  1048. 1
  1049. 1
  1050. 1
  1051. 1
  1052. 1
  1053. 1
  1054. 1
  1055. 1
  1056. 1
  1057. 1
  1058. 1
  1059. 1
  1060. 1
  1061. 1
  1062. 1
  1063. 1
  1064. 1
  1065. 1
  1066. 1
  1067. 1
  1068. 1
  1069. 1
  1070. 1
  1071. 1
  1072. 1
  1073. 1
  1074. 1
  1075. 1
  1076. 1
  1077. 1
  1078. 1
  1079. 1
  1080. 1
  1081. 1
  1082. 1
  1083. 1
  1084. 1
  1085. 1
  1086. 1
  1087. 1
  1088. 1
  1089. 1
  1090. 1
  1091. 1
  1092. 1
  1093. 1
  1094. 1
  1095. 1
  1096. 1
  1097. 1
  1098. 1
  1099. 1
  1100. 1
  1101. 1
  1102. 1
  1103. 1
  1104. 1
  1105. The "apartment dwellers cannot charge" thing is a canard. Most apartments have assigned spots, and I have never lived in an apartment building that didn't have covered parking and a light above it, meaning that it has electrical runs to the space. In Canada and other cold weather places, you have to have a plug at each space for the simple reason that if you don't plug your car's engine heater in at night, your car will be dead in the morning. Cars can be charged from 110V, but 220V is obviously better. Although apartment owners will moan about the costs, running 220V drops to each space isn't going to break them. I'm amazed sometimes at how useful a 220V/30A L2 charger really is. I have two long range cars, a Bolt at 238 miles and a Tesla M3 at 320 miles, and typically they charge up in 4 hours on a 6.6kW L2 charger, because I don't run them all the way to zero, nor is that a good idea. Both of my cars need charging perhaps once or twice a week, even with my 40 mile round trip commute. It's not even necessary to purchase a $400 charger. My Tesla came with a 220V charger free with the car, and it's a reasonable cost with the Bolt. With that and a 220V outlet, you are there for at least a 3.3kW charge. I will say that 130kW supercharging is amazing on the road (Tesla). I typically think about charging when the miles left go to 2 digits (<100), and I see the 100kW+ rate for only about 20 minutes. But that charger takes the Tesla from less than 100 to over 200 miles in those 20 minutes, which is rocket fast compared to other cars, and makes highway travel amazing.
    1
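The 4-hour L2 figure and the 20-minute supercharger figure above are easy to sanity-check with simple arithmetic. The pack sizes and miles-per-kWh below are assumed round numbers, and real charging tapers near full, so treat the results as ballpark:

```python
# Ballpark charge-time arithmetic: time = energy needed / charger power.
# Pack sizes and efficiency are assumed figures, not from the comment.
def hours_to_charge(energy_needed_kwh: float, charger_kw: float) -> float:
    return energy_needed_kwh / charger_kw

# Refilling the middle ~26 kWh of a pack on a 6.6 kW L2 charger:
print(f"{hours_to_charge(26, 6.6):.1f} hours")      # ~3.9 hours, matching "4 hours"

# Adding ~100 miles at ~250 Wh/mile (~25 kWh) on a 130 kW supercharger:
print(f"{hours_to_charge(25, 130) * 60:.0f} minutes")  # ~12 min ideal; taper stretches it toward 20
```

The gap between the 12-minute ideal and the observed ~20 minutes is the taper: the car only holds the peak rate over part of the charge.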
  1106. 1
  1107. 1
  1108. 1
  1109. 1
  1110. 1
  1111. 1
  1112. 1
  1113. 1
  1114. 1
  1115. 1
  1116. 1
  1117. 1
  1118. 1
  1119. 1
  1120. 1
  1121. 1
  1122. 1
  1123. 1
  1124. 1
  1125. 1
  1126. 1
  1127. 1
  1128. 1
  1129. 1
  1130. 1
  1131. 1
  1132. 1
  1133. 1
  1134. 1
  1135. 1
  1136. 1
  1137. 1
  1138. 1
  1139. 1
  1140. 1
  1141. 1
  1142. 1
  1143. 1
  1144. 1
  1145. 1
  1146. 1
  1147. 1
  1148. 1
  1149. 1
  1150. 1
  1151. 1
  1152. 1
  1153. 1
  1154. 1
  1155. 1
  1156. 1
  1157. 1
  1158. 1
  1159. 1
  1160. 1
  1161. 1
  1162. 1
  1163. 1
  1164. 1
  1165. 1
  1166. 1
  1167. 1
  1168. 1
  1169. 1
  1170. 1
  1171. 1
  1172. 1
  1173. 1
  1174. 1
  1175. 1
  1176. 1
  1177. 1
  1178. 1
  1179. 1
  1180. 1
  1181. 1
  1182. 1
  1183. 1
  1184. 1
  1185. 1
  1186. 1
  1187. 1
  1188. 1
  1189. 1
  1190. 1
  1191. 1
  1192. 1
  1193. 1
  1194. 1
  1195. 1
  1196. 1
  1197. 1
  1198. 1
  1199. 1
  1200. 1
  1201. I put my open source contributions in my resume, at the top. Nothing. I questioned employers about it, including the ones who hired me. They said they never looked at it. Complete waste of time. I had one employer ask about my contributions to the Linux kernel, and when I said I had some, he wanted me to show upstream pushes with my name on them. I told him that those contributions bore the name of the company I worked for, not MINE. Again, complete waste of time. So I am supposed to get the people I write code for to agree to put my name on everything. Ok. Still thinking about that; I don't work on kernel drivers at the moment, so moot point. I guess the point here is if you are that much of a social climber that you run about trying to get credit for everything, consider a career in management, not programming. Another issue is that I program for FUN on my own time, on projects that are valuable to ME. Even if employers look at my code, they are not going to see an open source "I did an embedded program for an ARM bluetooth chip", i.e., I don't have work examples online, and it would bore me to make one. Even if I did, it would be an artificial example that didn't really get implemented anywhere. I had exactly ONE example program like that, a disk drive diagnostic, an extensive one. I have programmed one of these several times, and I figured that if I did it on my own time, I could carry it from job to job instead of rewriting it every time. I put that up as a work example. I highly suspect that even if employers look at it, they go "ewwww, disk drives", and don't bother to look further.
    1
  1202. 1
  1203. 1
  1204. 1
  1205. 1
  1206. 1
  1207. 1
  1208. 1
  1209. 1
  1210. 1
  1211. 1
  1212. 1
  1213. 1
  1214. 1
  1215. 1
  1216. 1
  1217. 1
  1218. 1
  1219. 1
  1220. 1
  1221. 1
  1222. 1
  1223. 1
  1224. 1
  1225. 1
  1226. 1
  1227. 1
  1228. 1
  1229. 1
  1230. 1
  1231. 1
  1232. 1
  1233. 1
  1234. 1
  1235. 1
  1236. 1
  1237. 1
  1238. 1
  1239. 1
  1240. 1
  1241. 1
  1242. 1
  1243. 1
  1244. 1
  1245. 1
  1246. 1
  1247. 1
  1248. 1
  1249. 1
  1250. 1
  1251. 1
  1252. 1
  1253. 1
  1254. 1
  1255. 1
  1256. 1
  1257. 1
  1258. 1
  1259. 1
  1260. 1
  1261. 1
  1262. 1
  1263. 1
  1264. 1
  1265. 1
  1266. 1
  1267. 1
  1268. 1
  1269. 1
  1270. 1
  1271. 1
  1272. 1
  1273. 1
  1274. 1
  1275. 1
  1276. 1
  1277. 1
  1278. 1
  1279. 1
  1280. 1
  1281. 1
  1282. 1
  1283. 1
  1284. 1
  1285. 1
  1286. 1
  1287. 1
  1288. 1
  1289. 1
  1290. 1
  1291. 1
  1292. 1
  1293. 1
  1294. 1
  1295. 1
  1296. 1
  1297. 1
  1298. 1
  1299. 1
  1300. 1
  1301. 1
  1302. 1
  1303. 1
  1304. 1
  1305. 1
  1306.  @boembab9056  If you scale the screen, you are letting the OS/presentation system draw things bigger for you. If you do it in the application, the application is doing the "scaling", QED. So let's dive into that. In an ideal world, the presentation system is taking all of your calls, lines, drawings, pictures, and scaling them intelligently. In that same world, the Easter Bunny is flying out of your butt. All the system can really do is interpolate pixels. Let's take a hypothetical for all of you hypothetical people. They come out with a 4M display, meaning 4 megapixels, not 4 kilopixels. You scale 100%, meaning "no scaling". All of the stupid apps look like little dots on the screen because they are compressed to shit. Now we scale some 1000 times to get it all back. If the scaler does not consider what was drawn, but just pixels, it's going to look terrible as scaled, just as if you blow up a photo on screen far in excess of its resolution. Now the apps that are NOT stupid, but actually drew themselves correctly, are going to look fine, perhaps that much smoother because they took advantage of the extra resolution. Now let's go one more. I know this is boring; drink coffee, pay attention. Drawing characters at small point sizes is a problem, right? People worked out all kinds of systems, like "hints", to try and make fonts look good at small point sizes like 5-8 points. But you bought that 4K monitor and that 4K card, and THEN you bought a fast CPU to push all of that data around. Guess what? That 5 point problem you had is gone. Just gone. There is sufficient resolution to display fonts on screen down to the point where you can barely see them. Now ask yourself: how does a scaling algorithm do that unless it DRAWS the characters at that resolution? Keep in mind that programmers spent decades on TrueType formats and computed character drawing to match mathematical curves to pixels. Is an interpolated scaler going to do that? No, no it is not. Peace out.
    1
  1307.  @boembab9056  Look, I know you are a smart guy, but think about what you are saying. If the application knew how to take care of its own scaling, the OS would not need to do anything, no scaling at all. The typical flow is: 1. If the application has never come up before (default), it takes the measure of the screen, then presents itself according to a rule of thumb, say 1/4 the size of the screen. 2. Size the fonts according to the on-screen DPI. I.e., if you have 12 point type, then choose an on-screen font accordingly. Points are 1/72 of an inch, so 12 point type is about 0.17 of an inch in height ON SCREEN. 3. Set other dimensions accordingly. I personally use the point size to dimension everything else on screen, and I have found that works well. 4. If the application has executed previously, then just use the last window size. That is a reasonable expectation for the user. Do that, and no scaling is required. The app knows what to do. If you think about it, what scaling REALLY does is accommodate stupid applications that don't understand how to scale themselves properly. I follow all of the rules above in my applications. I'll readily admit that I had to do some work to get to 4K displays. Mostly it was because I had used standard (and, it turns out, arbitrary) measures to size items in the app's display. Also, when moving to 4K, I implemented a standard pair of keys to let the user adjust the size of the app's display (ctl-+ and ctl--, same as Chrome and most other apps). This is the correct solution. Rescaling all applications because SOME programmers don't know what they are doing is not the right solution, and indeed, it actually punishes the applications that did the right thing by messing with their scaling instead of letting them do it themselves.
    1
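The point-to-pixel rule in step 2 above can be written out directly. The monitor sizes and DPI values below are assumed examples, not from the comment:

```python
# Step 2 as arithmetic: 1 point = 1/72 inch, so a font's on-screen pixel
# height is points/72 * DPI. Same physical size on any display.
def points_to_pixels(points: float, dpi: float) -> float:
    return points / 72.0 * dpi

# Assumed example: a 24" 16:9 monitor is ~20.9" wide.
# At 1920 horizontal pixels that is ~92 DPI; at 3840 (4K) it is ~184 DPI.
print(round(points_to_pixels(12, 92)))   # 12pt is ~15 px tall at 1080p
print(round(points_to_pixels(12, 184)))  # ~31 px tall at 4K: same size, more detail
```

An app that sizes its fonts this way renders at the display's native resolution, which is why it needs no pixel-interpolating scaler on top.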
  1308. 1
  1309. 1
  1310. 1
  1311. 1
  1312. 1
  1313. 1
  1314. 1
  1315. 1
  1316. 1
  1317. 1
  1318. 1
  1319. 1
  1320. 1
  1321. 1
  1322. 1
  1323. 1
  1324. 1
  1325. 1
  1326. 1
  1327. 1
  1328. 1
  1329. 1
  1330. 1
  1331. 1
  1332. 1
  1333. 1
  1334. 1
  1335. 1
  1336. 1
  1337. 1
  1338. 1
  1339. 1
  1340. 1
  1341. 1
  1342. 1
  1343. 1
  1344. 1
  1345. 1
  1346. 1
  1347. 1
  1348. 1
  1349. 1
  1350. 1
  1351. 1
  1352. 1
  1353. 1
  1354. 1
  1355. 1
  1356. 1
  1357. 1
  1358. 1
  1359. 1
  1360. 1
  1361. 1
  1362. 1
  1363. 1
  1364. 1
  1365. 1
  1366. 1
  1367. 1
  1368. 1
  1369. 1
  1370. 1
  1371. 1
  1372. 1
  1373. 1
  1374. 1
  1375. 1
  1376. 1
  1377. 1
  1378. 1
  1379. 1
  1380. 1
  1381. 1
  1382. 1
  1383. 1
  1384. 1
  1385. 1
  1386. 1
  1387. 1
  1388. 1
  1389. 1
  1390. 1
  1391. 1
  1392. 1
  1393. 1
  1394. 1
  1395. 1
  1396. 1
  1397. 1
  1398. 1
  1399. 1
  1400. 1
  1401. 1
  1402. 1
  1403. 1
  1404. 1
  1405. 1
  1406. 1
  1407. 1
  1408. 1
  1409. 1
  1410. 1
  1411. 1
  1412. 1
  1413. 1
  1414. 1
  1415. 1
  1416. 1
  1417. 1
  1418. 1
  1419. 1
  1420. 1
  1421. 1
  1422. 1
  1423. 1
  1424. 1
  1425. 1
  1426. 1
  1427. 1
  1428. 1
  1429. 1
  1430. 1
  1431. 1
  1432. 1
  1433. 1
  1434. 1
  1435. 1
  1436. 1
  1437. 1
  1438. 1
  1439. 1
  1440. 1
  1441. 1
  1442. 1
  1443. 1
  1444. 1
  1445. 1
  1446. 1
  1447. 1
  1448. 1
  1449. 1
  1450. 1
  1451. 1
  1452. 1
  1453. 1
  1454. 1
  1455. 1
  1456. 1
  1457. 1
  1458. 1
  1459. 1
  1460. 1
  1461. 1
  1462. 1
  1463. 1
  1464. 1
  1465. 1
  1466. 1
  1467. 1
  1468. 1
  1469. 1
  1470. 1
  1471. 1
  1472. 1
  1473. 1
  1474. 1
  1475. 1
  1476. 1
  1477. 1
  1478. 1
  1479. 1
  1480. 1
  1481. 1
  1482. 1
  1483. 1
  1484. 1
  1485. 1
  1486. 1
  1487. 1
  1488. 1
  1489. 1
  1490. 1
  1491. 1
  1492. 1
  1493. 1
  1494. 1
  1495. 1
  1496. 1
  1497. 1
  1498. 1
  1499. 1
  1500. 1
  1501. 1
  1502. 1
  1503. 1
  1504. 1
  1505. 1
  1506. 1
  1507. 1
  1508. 1
  1509. 1
  1510. 1
  1511. 1
  1512. 1
  1513. 1
  1514. 1
  1515. 1
  1516. 1
  1517. 1
  1518. 1
  1519. 1
  1520. 1
  1521. 1
  1522. 1
  1523. 1
  1524. 1
  1525. 1
  1526. 1
  1527. 1
  1528. 1
  1529. 1
  1530. 1
  1531. 1
  1532. 1
  1533. 1
  1534. 1
  1535. 1
  1536. 1
  1537. 1
  1538. 1
  1539. 1
  1540. 1
  1541. 1
  1542. 1
  1543. 1
  1544. 1
  1545. 1
  1546. 1
  1547. 1
  1548. 1
  1549. 1
  1550. 1
  1551. 1
  1552. 1
  1553. 1
  1554. 1
  1555. 1
  1556. 1
  1557. 1
  1558. 1
  1559. 1
  1560. 1
  1561. 1
  1562. 1
  1563. 1
  1564. 1
  1565. 1
  1566. 1
  1567. 1
  1568. 1
  1569. 1
  1570. 1
  1571. 1
  1572. 1
  1573. 1
  1574. 1
  1575. 1
  1576. 1
  1577. 1
  1578. 1
  1579. 1
  1580. 1
  1581. 1
  1582. 1
  1583. 1
  1584. 1
  1585. 1
  1586. 1
  1587. 1
  1588. 1
  1589. 1
  1590. 1
  1591. 1
  1592. 1
  1593. 1
  1594. 1
  1595. 1
  1596. 1
  1597. 1
  1598. 1
  1599. 1
  1600. 1
  1601. 1
  1602. 1
  1603. 1
  1604. 1
  1605. 1
  1606. 1
  1607. 1
  1608. 1
  1609. 1
  1610. 1
  1611. 1
  1612. 1
  1613. 1
  1614. 1
  1615. 1
  1616. 1
  1617. 1
  1618. 1
  1619. 1
  1620. 1
  1621. 1
  1622. 1
  1623. 1
  1624. 1
  1625. 1
  1626. 1
  1627. 1
  1628. 1
  1629. 1
  1630. 1
  1631. 1
  1632. 1
  1633. 1
  1634. 1
  1635. 1
  1636. 1
  1637. 1
  1638. 1
  1639. 1
  1640. 1
  1641. 1
  1642. 1
  1643. 1
  1644. 1
  1645. 1
  1646. 1
  1647. 1
  1648. 1
  1649. 1
  1650. 1
  1651. 1
  1652. 1
  1653. 1
  1654. 1
  1655. 1
  1656. 1
  1657. 1
  1658. 1
  1659. 1
  1660. 1
  1661. 1
  1662. 1
  1663. 1
  1664. 1
  1665. 1
  1666. 1
  1667. 1
  1668. 1
  1669. 1
  1670. 1
  1671. 1
  1672. 1
  1673. 1
  1674. 1
  1675. 1
  1676. 1
  1677. 1
  1678. 1
  1679. 1
  1680. 1
  1681. 1
  1682. 1
  1683. 1
  1684. 1
  1685. 1
  1686. 1
  1687. 1
  1688. 1
  1689. 1
  1690. 1
  1691. 1
  1692. 1
  1693. 1
  1694. 1
  1695. 1
  1696. 1
  1697. 1
  1698. 1
  1699. 1
  1700. 1
  1701. 1
  1702. 1
  1703. 1
  1704. 1
  1705. 1
  1706. 1
  1707. 1
  1708. 1
  1709. 1
  1710. 1
  1711. 1
  1712. 1
  1713. 1
  1714. 1
  1715. 1
  1716. 1
  1717. 1
  1718. 1
  1719. 1
  1720. 1
  1721. 1
  1722. 1
  1723. 1
  1724. 1
  1725. 1
  1726. 1
  1727. 1
  1728. 1
  1729. 1
  1730. 1
  1731. 1
  1732. 1
  1733. 1
  1734. 1
  1735. 1
  1736. 1
  1737. 1
  1738. 1
  1739. 1
  1740. 1
  1741. 1
  1742. 1
  1743. 1
  1744. 1
  1745. 1
  1746. 1
  1747. 1
  1748. 1
  1749. 1
  1750. 1
  1751. 1
  1752. 1
  1753. 1
  1754. 1
  1755. 1
  1756. 1
  1757. 1
  1758. 1
  1759. 1
  1760. 1
  1761. 1
  1762. 1
  1763. 1
  1764. 1
  1765. 1
  1766. 1
  1767. 1
  1768. 1
  1769. 1
  1770. 1
  1771. 1
  1772. 1
  1773. 1
  1774. 1
  1775. 1
  1776. 1
  1777. 1
  1778. 1
  1779. 1
  1780. 1
  1781. 1
  1782. 1
  1783. 1
  1784. 1
  1785. 1
  1786. 1
  1787. 1
  1788. 1
  1789. 1
  1790. 1
  1791. 1
  1792. 1
  1793. 1
  1794. Add: 1. Reverse mortgages. 2. Medicare Advantage plans. 3. Home title protection. 4. Ink jet printers (most of the cost is cartridges, and they self destruct if you don't use them regularly). I got a lot out of high school, but it was because I took three hours of vocational classes daily: electronics, automotive, metal shop and print shop. I did that because I knew well that I was never going to be able to swing college, and because the other classes like math and English bored me. I was warned several times that I would not graduate, and indeed I didn't, but got a GED later after work at an electronics job. The math requirement was covered by a test -- turns out I didn't need a class to be good at it -- and the English class I took at night school, mostly because I ditched English class since it put me to sleep. True story: I kinda liked math but kept flunking it, so I would take it over again. One day I decided that I wasn't really bad at math and could pass the test if I actually studied for a change. I got an A+ on that test and was kicked out of the class for cheating :-). I don't miss high school. Ok, final joke. I married an English teacher. Yeah, funny, I know. Thus I almost spent more time at high school helping her with her teaching than I spent actually GOING to high school, which I ditched a lot. This was during the 2000s. I noticed that the high schools were selling off shop equipment and scaling back their vocational programs. Metal shop and auto shop are dirty jobs, don'tcha know. Is it any wonder the guy working on your car is from Vietnam?
    1
  1795. 1
  1796. 1
  1797. 1
  1798. 1
  1799. 1
  1800. 1
  1801. 1
  1802. 1
  1803. 1
  1804. 1
  1805. 1
  1806. 1
  1807. 1
  1808. 1
  1809. 1
  1810. 1
  1811. 1
  1812. 1
  1813. 1
  1814. 1
  1815. 1
  1816. 1
  1817. 1
  1818. 1
  1819. 1
  1820. 1
  1821. 1
  1822. 1
  1823. 1
  1824. 1
  1825. 1
  1826. 1
  1827. 1
  1828. 1
  1829. 1
  1830. 1
  1831. 1
  1832. 1
  1833. 1
  1834. 1
  1835. 1
  1836. 1
  1837. 1
  1838. 1
  1839. 1
  1840. 1
  1841. 1
  1842. 1
  1843. 1
  1844. 1
  1845. 1
  1846. 1
  1847. 1
  1848. 1
  1849. 1
  1850. 1
  1851. 1
  1852. 1
  1853. 1
  1854. 1
  1855. 1
  1856. 1
  1857. 1
  1858. 1
  1859. 1
  1860. 1
  1861. 1
  1862. 1
  1863. 1
  1864.  @jonassattler4489  Yeah, it's an unfortunate fact that I am aware of. I am a pilot as well as a programmer. The use of C is a shockingly bad choice for life critical applications such as avionics. I disagree that there are no alternatives. Java, which has been around for decades, is a fully protected language, and Pascal has been around for 50 years now. Fully protected. Allocators are well debugged. The bugs that occur happen not because of the allocator (which is usually only a page or so of code) but because of incorrect use. Regardless, again, there are languages other than C (that running segment violation of a language) that properly check allocations. I use C in most of my work; I have to make a living. You code in what your employer uses. But again, C is a terrible choice, and yes, there are alternatives. I have lots of issues with NASA's use of COTS (commercial off the shelf) software. When the Mars probe locked up because of a priority inversion, my first reaction was "they use a priority based RTOS???". Priority based OSes have known issues that (to me) stem from use of an oversimplified model of tasking. Demand based systems (well covered in the literature) are better and actually properly model the way tasking works. I'll put it succinctly: NASA chose their languages by popularity, not by fitness for purpose. The military went through the same thing, and they chose Ada because it had protection (long before Java and the protected language fad). It's simple: the military didn't want to run nuclear missiles on C code. They kinda got out there on their own limb with Ada, but they made it work. Ada is still in use.
    1
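What "fully protected" means in practice can be shown in any bounds-checked language. Here is a tiny illustration (in Python rather than Java or Pascal, purely for brevity): an out-of-range write raises a defined, catchable error, where the equivalent unchecked C write is undefined behavior that may silently corrupt memory.

```python
# A protected language turns an out-of-range access into a defined error.
# In C, buf[16] = 1 on a 16-element array compiles and may scribble on
# adjacent memory; here it raises IndexError every time.
buf = [0] * 16

caught = False
try:
    buf[16] = 1          # one element past the end
except IndexError:
    caught = True

print("out-of-range write caught:", caught)
```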
  1865. 1
  1866. 1
  1867. 1
  1868. 1
  1869. 1
  1870. 1
  1871. 1
  1872. 1
  1873. 1
  1874. 1
  1875. 1
  1876. 1
  1877. 1
  1878. The war over color TV standards was repeated with the advent of HDTV. The FCC had already signed off on an analog, backwards compatible system when a small silicon company called General Instrument showed that by using a digital carrier based system with MPEG, the amount of bandwidth needed by the (very wasteful) analog TV system could be reduced significantly, while at the same time dramatically increasing reception reliability. GI had already done this for digital cable systems, so over the air systems had fallen behind. The FCC did another about-face, and the broadcasters suddenly did as well. Cynics said that the true underlying cause was the broadcasters' realization that the very same digital technology that could fit an HDTV signal into the same 6 MHz channel as analog TV could just as well be used to compress existing TV into 1 MHz or less, and result in broadcasters losing up to 5/6ths of their very valuable spectrum real estate if the FCC (and the public) woke up to this fact. Thus HDTV was born, and the broadcasters used the technology to split up into multiple channels anyway... but under their control. The true result of all of the nonsense is that MPEG-2, and later MPEG-4, took TV broadcasting by storm, rendering the actual method used to broadcast TV increasingly irrelevant. The broadcasters kept their spectrum allocations, but the number of over the air users decreases daily. And the FCC increasingly puts pressure on broadcasters to give up that real estate for other uses.
    1
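The spectrum arithmetic behind the "5/6ths" argument above can be sketched with approximate figures (the ATSC payload rate is from memory, and the MPEG-2 program rates are assumed rough values):

```python
# Back-of-envelope subchannel math: one digital 6 MHz broadcast channel
# (ATSC payload ~19.4 Mbps) versus rough MPEG-2 program bitrates.
atsc_payload_mbps = 19.4   # approximate 8-VSB payload in a 6 MHz channel
hd_mpeg2_mbps = 15.0       # rough rate for one MPEG-2 HD program
sd_mpeg2_mbps = 3.5        # rough rate for one MPEG-2 SD program

print("HD programs per channel:", int(atsc_payload_mbps // hd_mpeg2_mbps))
print("SD programs per channel:", int(atsc_payload_mbps // sd_mpeg2_mbps))
```

On these assumptions, a channel that once carried one analog program carries either one HD program or roughly five SD subchannels, which is exactly the trade broadcasters ended up exploiting.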
  1879. 1
  1880. 1
  1881. 1
  1882. 1
  1883. 1
  1884. 1
  1885. 1
  1886. 1
  1887. 1