Youtube comments of Scott Franco (@scottfranco1962).
-
1200
-
832
-
777
-
501
-
496
-
451
-
361
-
330
-
I own both of these cars, a 2017 Bolt and a 2018 Tesla M3. My wife drives the Bolt, I drive the M3. We both had EVs previously, a Leaf and a Spark. What changed principally from day to day is that we don't both rush for the charger every night. My wife likes the hatchback, the fact that the Bolt is slightly smaller, and she appreciates the fact she does not have to worry about range to get anywhere around the city. She rarely if ever charges away from home.
I have a long commute, 44 miles round trip, charge about every 3 days, and use chargers away from home only when we take long trips.
The biggest difference between the cars is charging away from home. We went from San Francisco to Los Angeles in both cars. In the Bolt, we were restricted to highway 101, a longer route, because the shortest and most heavily traveled route has no CCS chargers. At all. According to PlugShare, it still does not. Trying to charge down the 101 is somewhat hit or miss. There "appear" to be a lot of high power chargers for CCS, but most of them are actually half-power 25 kW ChargePoint stations that take way longer to charge. On the way down we made it with one charge, but we left fully charged, stopped for 2 hours (!) at a 50 kW charger, and barely had any charge left when we made it to the San Fernando Valley. On the way back we hit three chargers, made worse because the hotel we stayed at had no charger, so we had to hit one on the way out. Note: if you can find a hotel with a charge spot, even only an L2, this is a huge help, since you leave the hotel fully charged.
With the Tesla we left fully charged, hit the Harris Ranch charger even though we could have gone farther, and charged twice in LA before returning on the I5. Another charge at Harris Ranch got us home.
There really is no comparison between the cars when it comes to on the road charging. With CCS charging you are lucky if there are two 50 kW charging spots, and the chances are good one of them is out of order. When you get there you fiddle with a card, and even having a card does not always help. My EVgo card works only 50% of the time or less, and they have sent me several replacement cards. Thus I waste 10 minutes at each charge calling them and setting up a charge on the account.
With Tesla, we go to chargers and there are 10 spots, with some up to 20 (!) spots. You plug the car in and go. The billing is completely automatic, and the prices are reasonable. There is an odd thing going on with the A/B system: if you plug in to A, and another car is on B, or vice versa, you both get slowed down, so you pick an A-B pair that is unoccupied. Coming into a charge station at about 50 miles left (my personal minimum), you get to see the car charge at over 100 kW for about 2 minutes to reach over 200 miles left, then less after that. It's truly a breathtaking sight to see a car charge that fast, and apparently this is unique to the Model 3, which has improvements in charging speed even over the Model S.
In short, we are happy with both cars, but the Bolt is clearly a local, city car, and the M3 is for long trips.
306
-
243
-
231
-
214
-
210
-
My favorite application for MEMS is in aviation, since I am a recreational pilot. One of the first and most successful applications of MEMS was accelerometers, which don't need openings in the package to work. Accelerometers can replace gyroscopes as well as enable inertial navigation, since they can be made to sense rotation as well as movement. With the advent of MEMS, avionics makers looked forward to replacing expensive and maintenance-intensive mechanical gyroscopes with MEMS. A huge incentive was reliability: a gyroscope that fails can bring down an aircraft. The problem was accuracy. MEMS accelerometers displayed drift that was worse than the best mechanical gyros. Previous inertial navigation systems used expensive fiber optic gyros that worked by sending light through a spool of fiber optic line and measuring the phase shift due to rotation.
MEMS accelerometers didn't get much better, but they are sweeping all of the old mechanical systems into the trash can. So how did this problem get solved? Well, the original technology for GPS satellite location was rather slow, taking up to a minute to form a "fix". But with more powerful CPUs it got much faster. Still, GPS cannot replace gyros, no matter how fast it can calculate. But the faster calculation enabled something incredible: the GPS calculation could be used to calibrate the MEMS accelerometers. By combining the two measurements mathematically, a combined GPS/multi-axis accelerometer package can accurately and reliably find a real time position and orientation in space. You can think of it this way: GPS provides position over long periods of time, but very accurately, and MEMS accelerometers provide position and orientation over short periods of time, but not so accurately. Together they achieve what neither technology can do on its own.
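For the curious, here is a toy one-dimensional sketch of that blending idea: a simple complementary filter in Python. The sample period, blend factor, and names are all made up for illustration, and real avionics typically use a full Kalman filter over many more states.

    DT = 0.01       # accelerometer sample period in seconds (assumed)
    ALPHA = 0.98    # how much to trust the short-term inertial estimate

    pos_est = 0.0   # blended position estimate
    vel_est = 0.0   # velocity from integrating acceleration

    def update(accel, gps_pos=None):
        """Advance the estimate by one accelerometer sample; fold in a GPS
        fix whenever one arrives (GPS fixes come far less often)."""
        global pos_est, vel_est
        vel_est += accel * DT                    # acceleration -> velocity
        dead_reckoned = pos_est + vel_est * DT   # velocity -> position
        if gps_pos is None:
            pos_est = dead_reckoned              # no fix: pure inertial step
        else:
            # short-term shape from the accelerometer, long-term truth from GPS
            pos_est = ALPHA * dead_reckoned + (1.0 - ALPHA) * gps_pos
        return pos_est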
The result has been a revolution in avionics. Now even small aircraft can have highly advanced "glass" panels that give moving maps, a depiction of the aircraft attitude, and even a synthetic view of the world outside the aircraft in conjunction with terrain data. It can even tell exactly which way the wind is blowing on the aircraft, because this information falls out of the GPS/accelerometer calculation.
197
-
157
-
142
-
141
-
I think I told this story here before, but it bears repeating. In the early 80's, windowed UV-erasable EPROMs were a thing. It was the time of Japan bashing, and accusations of their "dumping" on the market. We used a lot of EPROMs from different sources, and Toshiba was up and coming. We used mainly Intel EPROMs at the time. The state of the art back then was 4kb moving to 8kb (I know, quaint). Because of the window in the top of the EPROM you could see the chip. Most of us used this feature, when we had a bad EPROM, to get a little light show by plugging in the EPROM upside down and sealing the fate of the chip.
Anyways, Intel and Toshiba were in a price war, so the chips from each vendor were about equivalent in price. But side by side in the UV eraser tray, what you saw was shocking. The Toshiba chips were about 1/4 the size of the Intel chips. Yes, those "inferior" Japanese were kicking our a**es. Intel struggled along for a while, and exited the market for EPROMs. The "anti-dumping" thing had exactly one result: we could go to Japan, to the Akihabara market (from street vendors!), get chips with twice or four times the capacity of USA chips for cheap, and bring them back in our luggage.
130
-
122
-
120
-
111
-
103
-
101
-
100
-
I have taken some heat before when joining projects that were in bad shape and the prevailing opinion was that they wanted to start over. I say, "you'll just make the same mistakes over again". I read a great story about a professor who took a position in the electrical engineering department of a college. The lab there was in terrible shape; the instruments were all broken. The administrator asked the new professor to make a list of needed equipment and he would see if he could find the money for it.
The new professor replied "no, not a problem. We will use what we have". The administrator left, stunned. The new professor started his classes and took the new students out to the lab. Over the course of months, they took apart the broken equipment, got schematics for them, and went over what was wrong with each instrument as a group project. Slowly but surely, they got most of it working again. The students that did this became some of the best engineers the school had seen.
The moral of the story is applicable to software rewrites. The team that abandons the software and starts over does not learn anything from the existing software, even if they didn't write it. They create a new big mess to replace the old big mess. Contrast that with a team that is forced to refactor the code. They learn the mistakes in the code, how to fix them, and, perhaps most important of all, become experts at refactoring code.
In the last 2 years, I instituted a goal for myself that I would track down even "insignificant" problems in my code, and go after the hardest problems first. In that time I have been amazed at how often a "trivial" problem turned out to illustrate a deep and serious error in the code. Similarly, I have been amazed at how solving hard problems first makes the rest of the code go that much easier.
I have always been a fan of continuous integration without calling it that. I simply always suspected that the longer a branch went without being remerged, the harder it would be to reintegrate, vs. small changes and improvements taking a day or so. I can't take credit for this realization. Too many times I have been assigned to merge projects that were complete messes because of the long span of branch development. As the old saw goes, the better you perform such tasks the more of it you will get, especially if others show no competence in it.
93
-
81
-
74
-
73
-
72
-
70
-
63
-
60
-
Don't try that in California. Do it once, people think you have an interview in sales, or are weird. Do it consistently and they start to avoid you.
No seriously, I have always dressed in Silicon Valley style, jeans and questionably clean tee-shirts. My wife got on my case about not dressing nice, saying I should do better. Finally I just went whole hog and wore a suit every day except casual Fridays. People thought it was hilarious, but after a month the amusement went down. After that I just got an occasional "you in sales?" comment. Honestly, aside from getting me better looks from women (which I didn't really need, being married) and making me the odd man out (ok, I have always been the odd man out), I didn't really see the difference. The suit thing stopped with the marriage, oh well.
55
-
54
-
52
-
51
-
50
-
49
-
49
-
47
-
46
-
41
-
"the braindead nature of the x86 CPUs makes that difficult to do otherwise" THANK YOU. I have had endless debates with people who claim that including drivers in the kernel is the only way to go. In fact, Intel designed the x86 memory model to support three rings, from 0 to 3, with the drivers living outside the kernel. The problem is (as usual for Intel) they botched it. They tried to build tasking into the hardware, and the result was a bug ridden low performance mess that nobody uses. In fact, if you go back and read how Multics was designed, you will realize that Intel copied Multics design into their 80286 processors (yes, that Multics, the overcomplex system that Unix was created as a rebellion against). I explain the origin of 80286 protection model as being like the episode of Startrek where they go to a planet that is structured on the gangs of old Chicago because some previous visiting ship messed up and left a book on Chicago history behind, and they based their whole civilization on it. If you read up on the x86 protection/memory model, which all stems from the 80286, you realize that the designers read a book on Multics and that was it. They imitated the design. As one German Zeppelin engineer stated when getting a look at the R-101, which crashed and burned in France killing almost all abord: "you have imitated our design completely, including our mistakes".
PS. before you mark me as an Intel basher, I am an ex-Intel employee.
39
-
39
-
37
-
36
-
36
-
35
-
33
-
32
-
29
-
@nicksrub I'm 65, I have been an engineer since 19, a technician before that, and repaired TVs in my parents' garage for money when I was 15. I guess what other folks think of as career markers just strike me as a phase.
The story above was 2009 if I recall, just after the "great recession" of 2008. I didn't escape that, but not for the obvious reasons. My wife of 13 years decided that was the time to get a divorce. About the only good thing that resulted from that is that our house's value was depressed, and so I could buy her out. Things have changed since then. During the time I described, I got very good at making spreadsheets, and what I found in those days was that I was going to lose the house shortly. I rented out two rooms for a while and got past that. I think by 2010-2011 things took off again, and they really haven't slowed down since then.
I did full time up to 62 years of age, and preferred that. After 60, I got lots of interviews, but no interest. Since then it appears more companies are interested in having me as a contractor. I can't say if it is because of my age, or the times we live in. I can't say it is not possible to get full time now, when there seems to be a lot of demand. It just seems far easier to get contracts, and those have been good. My last contract was Apple. My current one is Google. Not exactly poor companies :-)
28
-
28
-
28
-
27
-
26
-
26
-
25
-
25
-
25
-
25
-
24
-
24
-
24
-
24
-
24
-
23
-
23
-
23
-
23
-
22
-
Great video on one of my favorite subjects. I'd like to add a couple things. First of all (as the poster below said), this history skips a very important branch of IC history, the gate array, from which FPGAs take their name (Field Programmable Gate Array). Basically gate arrays were ICs that consisted of a matrix of transistors (often termed gates) without the interconnect layers. Since transistors then, and largely even today, are patterned into the silicon wafer itself, this divided the wafer processing into two separate stages: the wafer patterning, and the deposition of aluminum (interconnect). In short, a customer could save quite a bit of money by just paying for the extra masks needed to deposit interconnects, and take stock wafers to make an intermediate type of chip between full custom and discrete electronics. It was far less expensive than full custom, but of course that was like saying that Kathmandu is not as high as Everest. Xilinx used to have ads showing a huge bundle of bills with the caption "does this remind you of gate array design? Perhaps if the bills were on fire".
Altera came along and disrupted the PLA/PAL market and knocked over the king of them all, the 22V10, which could be said to be the 7400 of the PAL market. They owned the medium scale programmable market for a few years until Xilinx came along. Eventually Altera fought back, but by then it was too late. However, Altera got the last word. The EDA software for both Xilinx and Altera began to resemble those "bills o' fire" from the original Xilinx ads, and Altera completely reversed its previous stance toward small developers (which could be described as "if you ain't big, go hump a pig") and started giving away their EDA software. Xilinx had no choice but to follow suit, and the market opened up with a bang.
There have been many alternate technologies to the RAM cell tech used by Xilinx, each with an idea towards permanently or semipermanently programming the CLB cells so that an external loading PROM was not required. Some are still around, but what was being replaced by all that work and new tech was a serial EEPROM with about 8 pins and approximately the cost of ant spit, so they never really knocked Xilinx off its tuffet. My favorite story about that was one maker here in the valley who was pushing "laser reprogrammability", where openings in the passivation of a sea-of-gates chip allowed a laser to burn interlinks and thus program the chip. It was literally PGA, dropping the F for field. It came with lots of fanfare, and left with virtual silence. I later met a guy who worked there and asked him "what happened to the laser programmable IC tech?". He answered in one word: contamination. Vaporizing aluminum and throwing the result outwards is not healthy for a chip.
After the first couple of revs of FPGA technology, the things started to get big enough that you could "float" (my term) major cells onto them, culminating with an actual (gasp) CPU. This changed everything. Now you could put most or all of the required circuitry on a single FPGA and the CPU to run the thing as well. This meant that software hackers (like myself) could get into the FPGA game. The only difference now is that even a fairly large scale 32 bit processor can be tucked into the corner of one.
In the olden days, when you wanted to simulate hardware for an upcoming ASIC, you employed a server farm running 24/7 hardware simulations, or even a special hardware simulation accelerator. Then somebody figured out that you could lash a "sea of FPGAs" together, load a big ol' giant netlist into it, and get the equivalent of a hardware simulation, but near the final speed of the ASIC. DINI and friends were born, large FPGA array boards that cost a couple of automobiles to buy. At this point Xilinx got wise to the game, I am sure. They were selling HUGE $1000-per-chip FPGAs that could not have had a real end consumer use.
21
-
21
-
21
-
20
-
18
-
18
-
18
-
17
-
17
-
17
-
The issue with rockets to space vs. flying to space is that it takes more net energy to fly to space than to rocket to it. Flying to space leaves the craft spending more time in the atmosphere and inducing drag, which takes power to overcome. A ballistic rocket spends less time fighting the atmosphere. The factor that could change that is using air breathing engines to the highest point possible. Since part of an air breathing engine's power comes from the oxygen in the atmosphere, that can replace carried oxidizer, i.e., oxygen. Additionally, the air can be quite thin: a craft flying very fast can scoop a lot of oxygen from a very thin atmosphere. Hence, the interest in ramjets.
However, without advanced jets, and perhaps even with them, it still does not make that much sense to fly to orbit. It takes more time, exposes the craft to heating effects longer, etc. So the question is, as it always was, why bother with it when we know so much about getting to space directly, with rockets. Most of the "flying space vehicles", such as the shuttle, don't use the wings to go to space, but rather to be maneuverable once they return to the atmosphere. Purely ballistic entry vehicles (Apollo) have essentially no aerodynamic control surfaces at all. Elon has proposed an intermediate solution, a rocket that uses fins designed to direct its path on return, sort of a cross between the two.
16
-
16
-
16
-
15
-
15
-
15
-
14
-
14
-
14
-
14
-
14
-
14
-
14
-
14
-
13
-
13
-
13
-
13
-
13
-
13
-
13
-
12
-
12
-
12
-
12
-
12
-
@bunyu6237 I think others would supply better info than I on this subject, since I haven't been in the IC industry for decades; I left it back in the late 1980's. At that time, the industry (reverse engineering) was moving from hand reversing to fully automated reversing. However, if you don't mind speculation, I would say there is no concrete reason why the reversing industry would not have kept up with newer geometries. The only real change would be that it's basically not possible to manually reverse these chips anymore. I personally worked on reversing a chip at about 4 generations beyond the Z80, which was not that much. At that time, blowing up a chip to the size of a ping-pong table was enough to allow you to see and reverse engineer individual transistors and connections.
Having said that, I have very mixed feelings about the entire process. I don't feel it is right to go about copying others' designs. I was told at the time that the purpose was to ensure compatibility, but the company later changed their story.
On the plus side, it was an amazing way for me to get onboard the IC industry. There is nothing like reverse engineering a chip to give you a deep understanding of it.
However, I would say I think I would refuse to do it today, or at least try to steer towards another job.
For anyone who cares about why I have a relationship to any of this, I used to try and stay with equal parts software and hardware. This was always a difficult proposition, and it became easier and more rewarding financially to stay on the software side only, which is what I do today. However, my brush with the IC industry made a huge impression on me, and still shapes a lot of what I do. For example, a lot of my work deals with SOCs, and I am part of a subset of software developers who understand SOC software design.
12
-
12
-
11
-
11
-
11
-
11
-
11
-
11
-
10
-
10
-
10
-
10
-
10
-
10
-
10
-
10
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
9
-
8
-
8
-
The definition of "basic english" is: a list of words that, if they can't directly express an idea, they can be strung together to express an idea, like "water thing to sit in to clean self" for "bath", etc. This is covered well in Bodmer's "The loom of language", the hypothesis of which is that if you are going to learn a language, you might as well learn the languages closely related to it at the same time, like the Romance languages (French, Italian, Spanish, etc).
8
-
8
-
8
-
Just one small addition: when Intel pushed onboard graphics, where the graphics memory was part of the main memory of the CPU, it was thought that this video solution would actually be faster, since the CPU would have direct access to the frame buffer, as well as having all of the resources there to access it (cache, DMA, memory management, etc). The reason they lost that advantage in the long run was the dual advantages of VRAM, or dual-ported video RAM, a RAM that could be read and written by the CPU at the same time as being serially read out to scan the video raster device, and the rise of the GPU, meaning that most of the low level video memory access was handled by a GPU on the video card that did the grunt work of drawing bits to the video RAM. Thus Intel instead ran down the onboard video rabbit hole. Not only did they not win the speed race with external video cards, but people began to notice that the onboard video solutions were sucking considerable CPU resources away from compute tasks. Thus the writing was on the wall. Later, gamers only knew the onboard video as that thing they had to flip a motherboard switch to disable when putting a graphics card in, and nowadays not even that. It's automatic.
8
-
8
-
8
-
8
-
8
-
8
-
8
-
Sounds like Europe. 30% unemployment does not happen overnight. So let's take a real example: McDonald's goes full auto. I mean you order from a kiosk, the food is made entirely by machine, delivered to a slot in front. They're not far from that now. I would argue that McDonald's would end itself by doing that; you don't want to narrow the difference between you and a vending machine to zero. But let's say so for the sake of argument.
Now a very small slice of the population makes a career of McDonald's. Some do, and I honestly love those guys; Mickey D's makes them wear ties, cause they are the store managers. They are freaking awesome. They are the ones who are going to run the world someday.
No, most are kids using Mcdonalds to pay for school, and they will be out of there soon. So as valuable to society as burger flipping is, or nowadays pressing buttons on the burger flipping machine, I suspect there are more valuable careers. So they are studying for the medical profession, or insurance actuary, whatever.
The way you get to 30% unemployment is to give people infinite unemployment benefits, more than they ever put into the system by working. That's how Europe does it, and that's how we did it. Remember the 99-week extensions? That was the program that dramatically increased the length of unemployment benefits. We ended that program, and everyone in Washington predicted disaster. So what actually happened? Turns out employment shot up. Right after they canceled the program. Now we have this "universal basic income" jazz, which means we take money from people who work (me) and give it to people who want to smoke pot all day. I.e., we want to be like Europe now.
Now before you dismiss me as a snot nose rightist, I lived through the 2008 crash AND a divorce AND with two kids to support AND a house to keep out of foreclosure. I had a 6 month unemployment time at the bottom of the recession. Nobody could get a job then, even in my tech fields. I used to keep spreadsheets on my finances every week, and I ran red ink for years.
What's the solution? Again, stop whining and go to work. You don't have to be the smartest worker or the hardest worker. You just have to be smarter than the other guys, and work harder than the other guys.
7
-
7
-
7
-
7
-
@SuperWhisk It's a big subject (age). It would explain why companies take efforts to figure out your age even if you are not allowed to list it. I used to have companies that would ask me for my date of birth while quickly adding "it's just for identification purposes!". Perhaps this might shock you, but I don't think I blame them. If you are in or near retirement age, the company has to assume you are looking at the calendar and wondering whether continuing to work is really worthwhile. Does that apply to me? I don't think so. If I were in retirement and someone gave me a remote contract with reasonable hours I would take it, even if I had to come to the company part of the time. And this arrangement seems popular. But then I like to work, and like to get out of the house on occasion. My wife of 10 years feels the same way. Is age discrimination unfair? Wellll... yes, but discrimination by skin color or similar reasons is a lot less fair. I would say trying to categorize everyone is really the issue.
A short story (yea, again sorry). I worked at a place where another employee was clearly older, and likely retirement age. His boss was a friend of mine, and he ended up terminating him. I asked him about it, and he said he gave him several chances to improve his productivity, but without result.
It seems to be all relative anyways. At 65, I found that unless I take a nap after lunch I can't function. I sit at my terminal and fog out. A 20 minute nap fixes that. I had one boss that had a real problem with that. At my current contract (Google), they actually have rooms to take naps in (god I love this place).
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
My favorite banana story: when my new Colombian wife came to the USA, I rapidly learned that the word, kinda, for bananas was "plátanos". When my beautiful wife asked for plátanos, I pointed to the pile of fresh, yellow bananas at the local store. "Estos no son plátanos" (those are not bananas) was the reply. We repeated this a couple of times. Finally, we went to the "Tienda Mexicana" (Mexican supermarket). We came across a table where there was a pile of bananas, but clearly past their prime. Brown and rotting. Yeech. My wife, seeing this, clapped her hands together and exclaimed "plátanos!". Turns out that in Latin America, yellow bananas are unripe, and brown ones are where the fruit has achieved the maximum sugar content!
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
@divyanshbhutra5071 And I go to LA from San Jose, 500 miles, and to Reno, and Portland. LA for example is a one charge stop if I leave with a full charge. It's a 5 hour trip, I stop for lunch, the car charges in less than 30 minutes, shorter than my lunch time. It's about 2 1/2 hours per leg (to and from the charger). I would not want to sit in the car for 5 hours in any case. Thus, extra time waiting for the charger: 0 (zero). In fact I probably spend less time "fueling" up the car than you do, because you go, pump or have gas pumped, go to the counter, pay, then go to the restaurant and have a meal. I drive to the charger, plug in, and forget about it.
This thing has been played up in the anti-EV press wayyyy too much. In actual use, no problem.
6
-
6
-
It's a good documentary. It also feels older than its date here on YouTube, with a lot of emphasis on the Model S and not much on the Model 3. I like the fact that they actually found and interviewed an ex-Tesla employee. American TV has been really behind the curve in showing the real face of Tesla.
The German makers are stepping up to the plate with mid-price cars with real range and charge times, like the e-tron. However, they have a lot of ground to make up, and Tesla is moving ahead again, with lower prices and a new 250 kW charger vs. the 150 kW/350 kW chargers of the CCS family. The issue there is that Tesla is far, far ahead in charger numbers. The vast majority of CCS charging stations are 50 kW or even less (it's hard to tell, since the best web site, PlugShare, will not filter by charger wattage). Eventually, the American makers along with Europe will wake up and push chargers, but this is not really their priority right now, since the number of cars capable of 150 kW or better charging is minimal.
The lag by the German makers has been impressive. BMW's offerings have been expensive cars with the same range, < 100 miles, as the low end EVs, with no competition for Tesla whatever. The Porsche Taycan is competition for the Model S, a high end one at that, a car that is already going obsolete. Good luck finding a 350 kW charger for this science project. The e-tron is more direct competition for what is becoming the main car of Tesla, the Model 3, but again, is far behind on the charging curve. VW shows the most promise, but so far that is all we have seen from them: promises, promises.
In short, the German makers are talking now about waking up and meeting the competition from Tesla. If that is true, then the German makers need to put down the coffee and get on the train to work. Time is ticking away.
One thing I think gets lost in the "car makers vs. Tesla" story is that EV makers around the world are not competing against American car makers, who are still (by comparison) fast asleep (in the back of a pickup truck). They are competing against Silicon Valley, and Silicon Valley companies MOVE. Like rapidly.
6
-
6
-
6
-
6
-
6
-
"reverse RPN" is not Reverse Reverse PN, it is just PN. Polish notation is like + 1 2 or add 1 and 2, RPN is 1 2 +. Polish notation was specifically prefix notation, which was considered more natural than 1+2 because you can write +1 * 2 3 to mean 1+(2*3), and indeed, this is a common notation used in compiler intermediate codes. However, RPN is considered even more natural than this because 2 3* 1+ exactly expresses operands and the operators that use them in proper order. Thus get 2, get 3, multiply, then get 1, and then add to the result of 2*3. It also is stack machine friendly.
If there never were such a thing as polish notation, I'd agree that reverse RPN makes sense, but polish notation was a thing before RPN.
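To make the "stack machine friendly" point concrete, here is a minimal RPN evaluator in Python (a toy handling only + and *; the function name is mine):

    def rpn_eval(expr):
        # Evaluate a space-separated RPN string with a stack, e.g. "2 3 * 1 +".
        stack = []
        for tok in expr.split():
            if tok in ("+", "*"):
                b = stack.pop()   # right operand
                a = stack.pop()   # left operand
                stack.append(a + b if tok == "+" else a * b)
            else:
                stack.append(float(tok))
        return stack.pop()

    print(rpn_eval("2 3 * 1 +"))   # prints 7.0, i.e. 1 + (2 * 3)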
6
-
6
-
5
-
5
-
5
-
5
-
5
-
What people miss about the RISC revolution is that in the 1980s, with Intel's 8086 and similar products, the increasingly complex CPUs of the day were using a technique called "microcoding", a lower level instruction set inside the CPU that ran instruction decoding, etc. It was assumed that the technique, inherited from mini and mainframe computers, would be the standard going forward, since companies like Intel were increasing the number of instructions at a clip. RISC introduced the idea that if the instruction set were simplified, CPU designers could return to pure hardware designs, no microcode, and use that to retire most or all instructions in a single clock cycle. In short, what happened is the Titanic turned on a dime: Intel dropped microcode like a hot rock and created pure hardware CPUs to show that any problem could be solved by throwing enough engineers at it. They did it by translating the CISC x86 instructions to an internal RISC form and deeply parallelizing the instruction flow, the so called "superscalar" revolution. In so doing they gave x86 new life for decades.
I worked for SUN for a short time in the CPU division when they were going all in on multicore. The company was already almost in freefall. The Sparc design was flawed and the designers knew it. CEO Jonathan faced questioning at company meetings when he showed charts with Java "sales" presented as if it were a profit center (instead of given away for free). I came back to SUN again on contract after the Oracle merger. They had the same offices and the little Java mascots on their desks. It was probably telling that after my manager invited me to apply for a permanent position, I couldn't get it through their online hiring system, which was incredibly buggy, and then they went into a hiring freeze so it was irrelevant.
I should also mention that not all companies did chip design in that era with SUN workstations. At Zilog we used racks full of MicroVAXes and X Window graphics terminals. I still have fond memories of laying out CPUs and chain-smoking in the late 1980s until midnight.
5
-
@allentchang Hummm.... back in 1987 it was LSI workstations, if I recall. I don't know the operating system, but I believe they were Tektronix graphics terminals (Zilog). They were not fast, but very high resolution for the day. In 1993 it was Apollo workstations (Seagate), which were running Mentor. It certainly ran a Unix variant, but it was an unusual one. Into the new century, it's all been Verilog using Xilinx software (various startups), running on Windows (does Xilinx even run on Linux/Unix?). Our fabs also tell the story: last century it was a custom fab (Zilog), then the AT&T fab (Seagate), then after that probably TSMC, I don't recall.
Afternote: Actually I do recall, at Zilog the Tek terminals were driven by racks and racks of LSI-11's, a PDP-11 that fit in a single or double RU. I remember because we had a big serial port mux that would allow you to get a connection to any of the machines. I used to write scripts that would start jobs on multiple machines overnight, which was the only way to get reasonable simulations of chips. I believe they were running Unix. Our chip simulations were done on a custom gate level simulator that I learned a lot from, since it would simulate things like domino logic.
And yes, I am old.
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
I worked on CCTV masts for a while. I was replacing a CCTV installation inside a large aircraft hangar, about 3 stories up, and I realized I had placed the lift, an extensible ladder, about 3 feet off center, which was apparent only after climbing up it. I went ahead anyway and unscrewed the mount on the very heavy camera, leaning out to catch it. I realized, too late, that I had misjudged the length of the mounting screw and the camera was coming down before I was ready. I made a decision in that moment, and stood back and let the camera fall rather than try to catch it.
I returned to the CCTV company, placed the broken camera on the bench and walked out of that job then and there. My life was worth more than that messed up job.
5
-
5
-
5
-
@D R That's a great question. In fact we could just use the altitude from the GPS system. Right now you dial in the barometric "base pressure", or pressure at sea level. This is used to calibrate the altimeter so that it delivers accurate results; I believe it is within 100 feet of accuracy (other sources say the FAA allows 75 feet of accuracy). It's a big, big deal. A few feet could mean the difference between hitting a building and passing over it. Thus when you fly, you are always getting pressure updates from the controller, because you need updates that are as close as possible to the pressure in your area.
So why not use the GPS altitude, which is more accurate?
1. Not everyone has a GPS.
2. Even fewer have built in GPS (in the panel of the aircraft).
3. A large number of aircraft don't recalibrate their altimeters at all.
4. New vs. old. Aircraft have been around for a long time. GPS not so much.
If you know a bit about aircraft, you also know that number 3 there lobbed a nuclear bomb into this conversation. Don't worry, we will get there. First, there is GPS and there is GPS, implied by 1 and 2. Most GPS units in use are portable (in light aircraft). Long ago the FAA mandated a system based on transponders called "mode C" that couples a barometric altimeter into the transponder. OK, now we are going into the twisty road bits. That altimeter is NOT COMPENSATED FOR BASE PRESSURE. In general the pilot does not read it, the controller does (ok, most modern transponders do read it out, mine does, but an uncompensated altitude is basically useless to the pilot). The controller (generally) knows where you are, and thus knows what the compensating pressure is (no, he/she does not do the math, the system does it for them).
Note that GPS had nothing to do with that mode C discussion. So for the first part of this, for a GPS to be used for altitude, the pilot would have to go back to constantly reporting his/her altitude to the controller. UNLESS!
You could have a mode S transponder, or a more modern UAT transceiver. Then, your onboard GPS automatically transmits the altitude, and the position, and the speed and direction of the aircraft.
Now we are into equipage. Note "onboard GPS". That means built into the aircraft. Most GPS on light aircraft are handheld, which are a fraction of the cost of built in avionics. Please let's not get into why that is; it's about approved combinations of equipment in aircraft, calibration, and other issues. The mere mention of it can cause fistfights in certain circles.
Ok, now let's get into number 3. If you are flying over, say, 14,000 feet, it's safe to say you are not in danger of hitting any mountains, or buildings, or towers. Just other aircraft. So you don't care about pressure compensation. So the rules provide that if you are over 18,000 feet, you reach down and dial in the "standard pressure" of 29.92 inches of mercury, which the FAA has decreed is "standard pressure" (the FAA also has things like standard temperature, standard tree sizes, etc. Fun outfit). So what does that mean? Say you are TWA flight 1, and TWA flight 2 is headed the opposite direction, same altitude. Both read 18,000 feet. Are they really at 18,000 feet? No, but it doesn't matter. If they are going to collide, they are in the same area, and thus the same pressure. Meaning that their errors cancel. It doesn't matter that they are really at 19,123 feet, they both read the same. Thus climbing to 19,000 (by the altimeter) means they will be separated by 1,000 feet.
So the short answer is the final one. The barometric system is pretty much woven into the present way aircraft work. It may change, but it is going to take a long time. Like not in my lifetime.
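If you want to see the "errors cancel" point in numbers, here is a back-of-the-envelope sketch in Python. The pressure-altitude formula is the standard ISA approximation; the example pressure is an invented round number and the function name is mine.

    def pressure_altitude_ft(static_hpa):
        # Altitude indicated with the standard 29.92 inHg / 1013.25 hPa setting.
        return 145366.45 * (1.0 - (static_hpa / 1013.25) ** 0.190284)

    # Two aircraft converging in the same air mass sense the same static
    # pressure, so both altimeters read the same number even if that number
    # is off from the true altitude; the error is common to both of them.
    shared_static = 506.0                       # hPa, assumed for illustration
    print(pressure_altitude_ft(shared_static))  # both crews see roughly 18,000 ft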
5
-
@DumbledoreMcCracken I'd love to make videos, but I have so little free time. Allen was a true character. The company (Seagate) was quite big when I joined, but Allen still found the time to meet with small groups of us. There were a lot of stories circulating... that Allen had a meeting and fired anyone who showed up late because he was tired of nobody taking the meeting times seriously, stuff like that. He is famous for (true story) telling a reporter who asked him "how do you deal with difficult engineers?"... his answer: "I fire them!". My best story about him was our sailing club. I was invited to join the Seagate sailing club. They had something like a 35 foot Catalina sailboat for company use, totally free. We ended up sailing that every Wednesday in the regular race at Santa Cruz Harbor. It was owned by Allen. On one of those trips, after much beer, the story of the Seagate sailboat came out.
Allen didn't sail or even like sailboats. He was a power boater and had a large yacht out of Monterey harbor. He rented a slip in Santa Cruz, literally on the day the harbor opened, and had rented there since. The harbor was divided in two by a large automobile bridge that was low and didn't open. The clearance was such that only power boats could get through, not sailboats (unless they had special gear to lower the mast). That divided the harbor into the front harbor and back harbor.
As more and more boats wanted space in the harbor, and the waiting list grew to decades, the harbor office came up with a plan to manage the space, which was "all power boats to the back, sailboats to the front", of course with an exception for working (fishing) boats. They called Allen and told him to move. I can well imagine that his answer was unprintable.
Time went on, and their attempts to move Allen ended up in court. Allen felt his position as a first renter exempted him. The harbor actually got a law passed in the city to require sailboats to move to the back, which (of course) Allen termed the "Allen Shugart rule".
Sooooo.... comes the day the law goes into effect. The harbormaster calls Allen: "will you move your boat". Allen replies: "look outside". Sure enough, Allen moved his yacht to Monterrey and bought a used sailboat which was now in the slip. Since he had no use for it, the "Seagate sailing club" was born. It was not the end of it. The harbor passed a rule that the owners of boats had to show they were using their boats at least once a month. Since Allen could not sail, he got one of us to take him out on the boat, then he would parade past the Harbormaster's office and honk a horn and wave.
Of course Allen also did fun stuff like run his dog for president. In those days you either loved Allen or hated him, there was no in-between. I was in the former group, in case you could not tell.
I was actually one of the lucky ones. I saw the writing on the wall, that Seagate would move most of engineering out of the USA, and I went into networking for Cisco at the time they were still throwing money at engineers. It was a good move. I ran into many an old buddy from Seagate escaping the sinking ship later. Living in the valley is always entertaining.
5
-
5
-
5
-
5
-
5
-
It's a good question (further research into hard drives). They are still doing some amazing things, advanced magnetic materials, layered recording, etc. However, the basis of the industry is electromechanical, which means it is inherently slower and more complex than SSDs. You can only move a mass (head arm) so fast.
The recent research in disk drives has gone mainly to increasing their density, and therefore reducing cost. Because this does nothing to help the speed disadvantage of HDDs, this trend will actually accelerate the demise of the HDD industry, because it accelerates the trend of HDDs towards being a backup medium only.
HDDs cannot get any simpler. They have two moving parts, the heads and the disks, and both probably ride on air now (certainly true of heads, not sure about spindles). Because HDDs are more complex and take more manufacturing effort than SSDs, the cost advantage of HDDs is an illusion. The fall is near.
5
-
5
-
IC masking and screen printing: well, I think it's more accurate to say that these techniques were well known from the manufacture of printed circuit boards, which were in full swing at the time of the first ICs, and from there you get back to printing, both screen printing and lithography. Also, resists were in use before ICs, used to perform etching on metal, rock and other surfaces, which is very much a thing today. In fact, etching glass with acid, still done today, is almost a direct line to ICs, since silicon dioxide is basically glass.
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
4
-
4
-
4
-
4
-
4
-
@real_hello_kitty Well, people resources are 50% of your career. I suspect the employers you think are smearing you are not going to do so. These days employers are very worried about litigation, and managers are told to stick strictly to the facts. My references are from coworkers that I know had direct experience with my work. My dad (also an engineer) said it best: if you worked for a company for 2 years or so, that says a lot regardless of what your previous manager(s) say. Why did they keep you working there if you weren't producing?
Finally, here is an ace of spades for you, that is a common industry trick. Get a friend to act as a potential employer and call up your previous manager(s) to ask what they think of you. This will give you a very solid overview of where you stand.
I have been an engineer for 40+ years. I have had a few employers that I didn't want to use as a reference. So I didn't. I left them off my resume.
There is one thing that isn't in doubt. People are important. I have been at companies that were failing, but what I got out of it was far more important than a job. Good people, and good contacts. If you realize how valuable they are, you can work on developing your contacts. It works both ways. People call me after their last job ended, and I try to get them started on a new position, with my company or another.
4
-
4
-
4
-
4
-
4
-
Fatboy: a Bolt vs. Volt review would be a good idea. But sorry, I could not agree with your logic less. Let's take the Volt vs. the Bolt as a *second car*. For dropping that gas engine from the Volt, you get 150 more miles of range. That means my wife charges about once a week (she has a very short commute) and never worries about range. With the Volt, she would be back to nightly charging.
The Volt may make sense as an only car, but not as a second. I have friends with a Volt, and their thinking appears to mainly be centered around never having owned or leased a true battery electric car. The "I'm going to get stuck" thing is an imaginary fear. I had a Leaf for 3 years, had the quick-charging option, and used a quick charger perhaps 3-4 times a year. I "got stuck" exactly once, a couple blocks from home, because I tried to push the range too far, and that was mainly because I was going through a side of the city that has no real charging capability.
With a Bolt and a TM3, my ideas about charging and range are completely different, and realize I drove the Bolt for a year and a half before getting the TM3. With these cars I think about charging when I see 50 miles of range, so my "don't go below" reserve is most of the entire range of my old Leaf and Spark, which both got 75 miles of range. In both cars you can afford to keep that kind of reserve, and in fact it's better for the car not to be nearly discharged, as I used to do with my Leaf.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
@jfbeam Yea, semiconductors tend to fuse closed, unlike most circuit boards, which tend to fuse open, or, as you say, contacts, which tend to fuse closed.
When I worked in ICs we had a failure analysis presentation that showed why, complete with photos. Silicon... well, it melts with enough current. It forms a nice runny, and very conductive, river of molten silicon for an instant. I imagine that is right before it forms a miniature explosion. You kids won't remember, but we used to have UV erasable EPROMs with little windows in them. If you plugged them in backwards, cause, you know, stupid, or on purpose, you got a nice little light show. This probably accounted for the fact that bad ones never got kept around. They were too much fun to plug in backwards.
4
-
4
-
4
-
4
-
4
-
@tcmtech7515 I do hear that a lot... from people who don't own an EV. The standard refrain is that "I'll buy an EV when it gets 400 miles range", even though 200 miles was a common range for gas cars before high mileage hybrids. 400 miles, or likely 100kWh batteries, is an inefficient battery to haul around every day when average needs are 50 miles a day or so. It comes down to the story people tell themselves. "30 minutes for a fast charge is too much" means to me that people are trying to fit EVs into what they know about gas cars. I have owned EVs since 2013, and I average 2-3 times per YEAR when I wait at a charger. The rest is home charging when I don't really track, know about or CARE much about how long the car took to charge. Everyone I know who actually GOT an EV is pretty much the same. All the concerns they had before getting an EV are gone AFTER they actually get one.
I went from leasing a Leaf, to a Bolt that my wife now drives, to my Tesla M3. All were practical, but all differed in terms of range and convenience. A 75 mile range Leaf was practical as a commuter car, but not for long trips, and it took planning to be able to run significant errands during lunch. I charged it every night, and would have issues if I forgot to plug it in at night, usually requiring a stop off at a quick charger.
The Bolt changed that model to only needing to plug in 2-3 times a week, and made long trips possible, if not convenient. I took it from San Francisco to LA once after I got it, which required planning and some fairly major stops to charge along the way.
With the Tesla M3 there is no issue with long distance driving for me at all. I don't drive for 300 miles without stopping, which is 4 hours even at California highway speeds, so the 20-30 minutes it takes to charge can be taken during lunch or dinner breaks, and nowadays I pull into Tesla highway charging centers with 10 to 20 charging spots, most unoccupied.
4
-
4
-
4
-
4
-
4
-
4
-
@bunyu6237 A couple of reasons (better software than hardware). First of all, there is a larger group of people working on software than hardware, so the jobs are more plentiful and the demand greater. Second, hardware/software crossover people are considered odd birds, and when I used to do that I had people literally telling me to "pick a side", go one way or the other. I find it easier to get and do software projects, and the pay is better. I dabbled in Verilog long after I stopped being paid for hardware design, and I realized it would take a lot of work to get a foothold in good Verilog design with virtually no corresponding increase in salary, and more likely a decrease for a while as I gained credibility as a Verilog designer. The last time I was paid to design hardware it was still schematic entry (and yes, in case you haven't figured that out, I am indeed that old).
Of course, a lot of this is my personal situation. I am not sure any of the above would serve as career advice. I definitely consider my hardware background to be a career asset, since I specialize in low level software design (drivers, embedded, etc). Having said that, I keep up with hardware advances and have often dreamed of uniting my Verilog experience with my software experience. That dream is unrealized.
4
-
4
-
4
-
Good job. There was a bit of conflation there with microcode (it's firmware?). It would have helped to underline that it is entirely internal to the chip and operates the internals of the CPU. In any case, microcode was discarded with the Pentium series, KINDA. It actually lives on today in so called "slow path" instructions like block moves in the later CPUs, which use microcode because nobody cares if they run super fast or not, since they are generally only used for backwards compatibility and got deprecated in 64 bit mode.
I await the second half of this! Things took off again with the AMD64 and the "multicore wars". Despite the mess, the entire outcome probably could have been predicted on sheer economic grounds, that is, the market settling into a #1 and #2 player with small also-rans. Today's desktop market, at least, remains in the hands of the x86 makers except for the odd story of Apple and the M series chips. Many have pronounced the end of the desktop, but it lives on. Many or even most of my colleagues use Apple Macs as their preferred development machines, but, as I write this, I am looking out at a sea of x86 desktop machines. It's rare to see a Mac desktop, and in any case, at this moment even the ubiquitous MacBook Pro laptops the trendsetters love are still x86 based, although I assume that will change soon.
Me? Sorry, x86 everything, desktop and laptop(s). At last count I have 5 machines running around my house and office and 4 laptops. I keep buying Mac laptops and desktops, cause, you know, gotta keep up with things, but they grow obsolete faster than a warm banana. Yes, I had PowerPC Macs, and yes they ended up in the trash. And yes, I will probably buy Mac M2s at some point.
4
-
4
-
4
-
4
-
4
-
4
-
You characterize this as a transfer from the young to the old, but seniors paid taxes for this throughout their lives. Is the payout unequal to what was paid in? Sure, but that is government mismanagement of the money. Any reasonable calculation shows that the private sector could have done better with that money. What governments got out of the system, which nobody talks about, is a guarantee that the old won't end up on the welfare rolls, and the street.
I am a standard American retiree. I reached full retirement age, filed for social security, and kept on working. The SS money is not enough to live on unless I sell all of my assets. In the meantime, I work, I am perfectly able to, and I both pay into the system as well as get money from it.
4
-
On my best projects, I keep the tests, the code, and the complete report generated by the test, which is time and date stamped, in the repo. When I worked at Cisco Systems, we went one better than that and kept the entire compiler chain in the repo, including compiler, linker, tools, etc.
I teach the init-test-teardown model of individual tests, and one of the first things I do when entering a project is mix up the order of the individual tests in the run. This makes them fail depressingly often. Most test programmers don't realize that their tests often depend inadvertently on previous test runs to set state in the hardware or simulation. I do understand your point about running them in parallel, but admit I would rather run them in series, then mix up their order. Why? Because running them in parallel can generate seemingly random errors, and more importantly, those failures aren't repeatable. I would run them in order, then in mixed order, then lastly in parallel, because of this.
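For anyone who wants to try the order-mixing trick, here is a minimal sketch using pytest's collection hook (the environment variable name is my own invention, and the pytest-randomly plugin does a more complete job of this):

    # conftest.py -- shuffle the collected tests so hidden order dependencies
    # between them surface; print the seed so a failing order can be replayed.
    import os
    import random

    def pytest_collection_modifyitems(session, config, items):
        seed = int(os.environ.get("TEST_ORDER_SEED", random.randrange(1 << 32)))
        print(f"shuffling {len(items)} tests with seed {seed}")
        random.Random(seed).shuffle(items)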
Finally, many testers don't understand that testing needs to be both positive and negative. Most just test for positive results. Testing that the target should FAIL for bad inputs is as important as, or I would say MORE important than, the positive tests, since it goes to the robustness of the system. Further, we need to borrow concepts from the hardware test world and adopt coverage and failure injection.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
In the very early days, you were either in the S-100 (Altair compatible) camp or outside it. The Altair and other copies were extendable with plug in cards, and the number and type of those cards multiplied enormously. The first cards were build-it-yourself, but at some point more cards were shipped prebuilt. The S-100 standard evolved into 16 bits with 24 bit (16 meg) addressing, with various processors, including the 8086 family. The Altair itself fell behind the market it started, gaining a reputation for being less reliable than others.
When the IBM-PC shipped, it began to replace the S-100 family, but it took several years for that to happen. I used an S-100 compatible up to 1987, mainly because the IBM-PC was not a more powerful computer until the 80386 came out, and even then it didn't get an OS powerful enough to actually use the 32 bit features of the CPU until 1992-1995 (Unix, OS/2 and Windows 95). I made up the difference, like many people did, by using "extenders", systems that used the 16 bit DOS/Windows OS, but hosted a 32 bit program on the 80386.
3
-
3
-
3
-
3
-
3
-
3
-
By the way, I am conservative and don't regret the dropping of bombs on Japan, and even I don't believe the common explanation of nuking Japan. What the dropping of the bombs did is convince Japan that we could move from fighting a conventional war to fighting an almost "free" war that involved no men or materials, but simply flying over the island of Japan and slowly reducing it to a glowing pile of rubble. Remember that in those days nuclear bombs were not that powerful, and it would have taken dozens of nuclear weapons to bring Japan to its knees.
The dropping of the second nuclear bomb was necessary because Japan, which by the way was NOT stupid or without spies and probably knew something about the A-bomb before it was dropped on them, had reason to doubt that we had more than one of the weapons. They would have been right. In fact, to drop a third one would have been a problem for us.
The idea of being able to fight wars with pushbuttons was short lived. The proliferation of nuclear weapons, even to just the USSR, meant that even the "unthinkable" conventional wars would be fought under a nuclear umbrella, and in fact by proxy, which is exactly what happened. Thus the cost of war actually went up, not down (as it should be).
As for the current crop of crybabies wringing their hands over nuking Japan, the answer is simple. Nuke 'em again. They have been getting uppity of late, and that would put an end to this whole argument.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
I think of software design as a parallel to architecture. It is part art, but heavy on engineering. There are objectively "good" buildings and "bad" buildings, but over the centuries, we have come to understand that poorly designed buildings fall down and kill people, a lot of them.
Software today is divided into life critical applications and non-life critical applications. I have worked on both (medical applications). The problem is that there is not enough recognition that software projects fall down. Our complexity is simply out of control, and many projects end when the software has too many bugs and not enough understanding. Programmers move on; the code was not that well understood to begin with. Most software isn't designed to be read. Printed out, its only useful in the toilet, which dovetails nicely with today's idea that software should not be printed. In the old days (1960s era), it was common to keep programs in printed form, usually annotated by the keeper. If I dare to suggest that a given bit of code is ugly, I am told that nobody is ever going to look at it, and it is going to be discarded shortly in any case.
If we are engineers, we are a funny bit. Electronic engineers don't produce schematics that are messes of spaghetti without much (or any) annotation. Same with mechanical engineers, or (say) architects. I'd like to say that software is a new science, and we are going to evolve out of this phase, but I don't think I will see it in my lifetime.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
You have tapped into a classic boondoggle here. When I was with Cisco around 2000, the big "new wave" was all-optical switching, which was supposed to be faster than converting optical to electronic and back again, often with DMDs (digital micromirror devices). How did that work out? The startup world was littered with the smoking holes of failed companies.
I think the bottom line is we know a lot about devices that operate on electrical signals, but not so much about devices that work on pure light. As in, everyone knows what an electrical NAND gate is, and optical NAND gates are possible with optical signals, but good luck getting that to work, be integrated at high densities, and be efficient. Let's start with the basics. You can route signals easily in electronics, and the 10+ layers of interconnect on current ICs speak to this. What's a light conductor on an IC? Well, air, which should be free, but is far from it. You would have to couple in and out of the IC at many points, which is expensive in terms of real estate. You could conduct with glass, and that is a whole 'nuther level.
I'm not saying never. I'm just saying that with any breathless new wave of technology you have to look at history and see if that wave has not broken previously, or like every 5 years or so (cough.... AI).
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
That sampling rate demo was worth the price of admission, well done.
PCM, or pulse code modulation, should have nothing whatever to do with sampling. Pulse code modulation means to represent audio with simple on and off waveforms, i.e., pure binary digital signaling. It was invented for the Bell system, and means that instead of representing audio as a series of values, say 0-255 as for 8 bit sampling, you represent it as 0 or 1 values but carry the information by increasing the frequency of the pulse transitions. There is no other meaning of the word "pulse", which is a digital term, not an analog one.
PCM got hijacked as a general term for analog to digital conversion, perhaps because IBM-PCs used to be capable only of pulse code modulation of the speaker. There was no A/D converter; programs made all sounds via pure on/off waveforms or square waves, and there were even some programs that attempted to make crude speech via this method. I note that you are calling this PDM in the video, but, again, the word "pulse" is unambiguous. It does not mean A/D conversion.
Sometime I would like to research when, and perhaps why, the term PCM got misappropriated.
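To make the distinction concrete, here is a rough Python sketch contrasting the two representations being discussed: a tone stored as a series of multi-bit sample values versus the same tone as a pure on/off pulse stream whose density of 1s carries the level. The rates and the simple error-accumulation loop are illustrative only, not any particular standard; a real pulse stream would run far faster than the audio rate.
```python
# Sketch: multi-bit sample values vs. a pure on/off pulse stream.
import math

RATE = 8000          # samples per second, chosen just for the demo
FREQ = 440.0         # test tone in Hz
N = 64               # number of points to generate

signal = [math.sin(2 * math.pi * FREQ * n / RATE) for n in range(N)]

# Sampled-value style: each point stored as an 8-bit number (0..255).
sampled_8bit = [int(round((s + 1.0) / 2.0 * 255)) for s in signal]

# Pure on/off pulse stream: a first-order error-accumulation loop where only
# the density of 1s over time carries the level information.
pulses = []
acc = 0.0
for s in signal:
    acc += (s + 1.0) / 2.0          # normalize to 0..1 and accumulate
    if acc >= 1.0:
        pulses.append(1)
        acc -= 1.0
    else:
        pulses.append(0)

print(sampled_8bit[:16])   # values like [127, 159, 188, ...]
print(pulses[:16])         # only 0s and 1s
```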
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
I think the fact that Unix was written in C had a lot to do with the language's rise. It is impossible to know just how big a factor that was. By the time I got ahold of a Unix implementation (Unisoft on the 68000, 1982) I had written a stand-alone disc operating system in assembly that was modeled on Unix, or at least on the notion of Unix universal serial I/O drivers (which Unix didn't and, in Linux, does not actually use, but BSD Unix does). The fact that Unix was written in an HLL was very compelling. Computers and systems, even up to the PDP-11, were quite compact back then, and so was Unix. The PDP-11 had a 16 bit address limit, 64KB, because it was byte addressable even though it was a 16 bit machine. If you look at the original source for Unix back then, you would be amazed both at how compact it was and at how many odd tricks of the C language were used.
In any case, in the 1980s the two main systems were Unix and the up and coming Windows franchise. The PC was mostly assembly then, but by the end of the 1980s was mostly C. Throughout the 80s and into the 90s Unix was considered a "real" operating system and Windows/DOS/Apple to be toys by comparison. Windows NT and then Mac OS X changed that, and by then we were firmly in a C based world.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@lorabex791 1. I work at Google, so no. As in they may be trying to do it somewhere, but not here. We still use it. Whatever people think of C++, they worked very hard to make sure it was efficient. 2. Drivers are different than kernel code. The majority of drivers are created in industry (I have worked on both disk drivers and network drivers), and a lot of it is C++ nowadays, which is the preference. There are a lot of Windows drivers in C++, and it is customary to borrow code between windows and Linux drivers. 3. I wasn't talking about replacing any code. This is about new drivers.
I don't have a dog in this fight. As a driver/low level guy I do most of my work in C, but increasingly I have to do C++ for higher level stuff. Google loves C++ (despite what you heard). Rust is "said" to be gaining traction at Google, and I have done deep dives in Rust, so I'm not (completely) ignorant in it.
Any language that claims to be a general purpose language, IMHO, has to have the following attributes:
1. Generate efficient code.
2. Interoperate with C since that is what virtually all operating systems today are built in. This includes calling into C and callbacks from C (I don't personally like callbacks, but my opinion matters for squat these days). [1]
Rust is fairly good at calling C, and gets a D grade for callbacks. Golang is basically a non-starter, because they have such an odd memory model that interactions with C are expensive.
Any language can interoperate. Python can call into C, and it's an interpreter. It's about efficiency.
Obviously it's a moot point at this time, since "Linus ain't a gonna do it", but it should be discussed. C++ is too important a language to ignore, especially if Rust gets a pass.
Notes:
1. There are some rocks in the stream of C, including zero terminated strings and fungible pointers (pointer parameters that are NULL for "no parameter" and small integers for "not a pointer"). Most languages outside of C do not support some or all of this. These are bad habits in C and are eschewed these days; see the various arguments over strcmp vs. strncmp.
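As an illustration of the interop bar I'm describing, here is a minimal Python sketch using the standard ctypes module to call into C (libc's strlen) and to hand C a callback (libc's qsort calling back into Python). It assumes a Unix-like system where libc can be found; nothing here is specific to kernel work.
```python
# Sketch: calling into C and taking a callback from C, via Python's ctypes.
import ctypes
from ctypes.util import find_library

libc = ctypes.CDLL(find_library("c"))   # assumes a Unix-like system with a findable libc

# Calling into C: strlen.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t
print(libc.strlen(b"hello"))            # -> 5

# Callback from C: qsort calls back into a Python comparison function.
CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

def py_cmp(a, b):
    return a[0] - b[0]

arr = (ctypes.c_int * 5)(5, 1, 7, 33, 99)
libc.qsort.restype = None
libc.qsort(arr, len(arr), ctypes.sizeof(ctypes.c_int), CMPFUNC(py_cmp))
print(list(arr))                        # -> [1, 5, 7, 33, 99]
```
It works, but every crossing of that boundary has a cost, which is the efficiency point above.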
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@ttb1513 The last company I worked at doing silicon design, Seagate, was emblematic of the basic problems with ASICs. It was 1995 and they were new at ASIC design in house. And they weren't very good at it. It was random logic design before Verilog. An example of this was our timing verification. We had timing chains that were too long for the cycle time, and thus they simply allowed for the fact that the signal would be sampled in the next clock cycle. Now if you were an ASIC designer back then, what I just said would have made you reach for the Tums, if not a 911 call for cardiac arrest. It's an open invitation to metastability. And indeed, our AT&T fab guys were screaming at us to stop that. I got put in charge of hardware simulation for the design, and I have detailed this fiasco in these threads before, so won't go over it again.
The bottom line was that ASIC process vendors were losing trust in their customers to perform verification. The answer was that they included test chains in the designs that would automatically verify the designs at the silicon level. It meant that the manufactured silicon would be verified, that is, defects on the chip would be caught regardless of what the design did. My boss, who with the freedom of time I can now certify was an idiot, was ecstatic over this new service. It was a gonna fixa all o' de problems don't ya know? I pointed out to him, pointlessly I might add, that our design could be total cow shit and still pass these tests with flying colors. It was like talking to a wall.
In any case, the entire industry went that way. Designs are easier to verify now that the vast majority of designs are in Verilog. I moved on to software only, but I can happily say that there are some stunning software verification suites out there, and I am currently working on one, so here we are.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Get over it. Real astronomy is moving into outer space in any case, with observatories on the moon or at the Lagrange points. Musk has been very cooperative about painting the satellites to minimize interference. We need a system like Starlink, and LEOs will eventually give cellphone service around the world as well.
As far as collisions go, the "space network" satellite constellations have very regular patterns of travel and can be predicted programmatically. Indeed, that's how these systems work; ground stations can predict where satellites will be. Space is a big place, and even just limiting it to the sphere of a launch window, satellites are not difficult to avoid.
The idea of LEOs contributing to space trash is unserious. LEOs are self limiting, and indeed, that and the low altitude is what makes LEO attractive. LEOs will, in time, replace mountaintops as the locations of choice for communications, and this has already paid huge dividends for society (like GPS). Deeper space satellites are far more of a concern.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I pay taxes for those ^& streets, for those sidewalks, those parks. I paid that so everyone can walk there, enjoy the parks, drive their cars. I didn't pay for them to house the homeless, be a drug shooting gallery, or anything else. That takes away my rights, what I paid for, and hands it to a few people who squat there and make it impossible for anyone else to use and enjoy. I get so tired of people telling me those homeless have mental issues. Yes, if you do drugs all day you are going to have mental issues. If you get locked up, you can't shoot up.
2
-
2
-
2
-
I have had a Tesla since 2018, and a Bolt since 2017. I have two chargers, one Tesla at 48 amps and one J1772 at 30 amps. They are on individual drops from the panel, so they can both be used at the same time. I rarely use the Tesla charger now, because I get a charge for free at work, so why not. I could do a bunch of calculation to get the best range, or I could do what I do, which is just drive fast and not worry about it. There are Tesla chargers everywhere in California, so it's rare I need to extend range. 275 kW chargers are common here, and non-Tesla chargers are here at 350 kW, yes, more than Tesla. You can get a charge in 15 minutes, but frankly I rarely do that. Even just stopping for coffee more than covers 20-30 minutes. Even at my wasteful use of power, it cost $20 for the trip I just made from San Francisco to LA. Try that with a gas car.
The point is, charge fast, drive fast, be happy. EVs at WORST are far better than gas cars at their BEST. And everyone is going to be driving that way in their EVs very soon anyways.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I have had similar experiences playing against a computer. At first you lose, then you find out what the weakness of the computer is and you can form a strategy. For example, in a Quake style shoot 'em up, you can often win by not taking the "power pill", which only lasts for a while, and simply waiting in a corner for the computer players to take it, then shooting them as they try.
When Garry Kasparov played the computer for the world championship, I am convinced he was doing the same thing, finding out the weaknesses and learning to play against them. At one point he accused the computer makers of changing the program during play, and whether true or not, I believe he thought the computer had changed strategy for no apparent reason. In fact, by his own description, he was learning to make a series of apparently senseless moves before pressing an attack to confuse the program.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
And so the conversation degenerates. What a surprise.
Let me tell you a story. I grew up in LA. Once I was walking down the sidewalk, and three black kids were walking the other direction. I moved to the right side of the sidewalk, and they formed a line to walk abreast, blocking the sidewalk, and then some. I stepped into the street and waited for them to pass, glaring at me as they did so.
This was the 1960s, a bad time in LA and elsewhere. I was 10 years old, so you will forgive me if I didn't understand the meaning of it all. But then, the kids who wanted to teach me a lesson about black rights were, in LA, highly unlikely to have experienced the need to cross the street to the other side to avoid a white person, which happened in the deep south.
So you will forgive me for pointing this out, but this was an example of people protesting a problem they had never experienced, and putting that protest against a person, me, who never had done anything like that, and in fact didn't even know what was going on. The reason I recount the story is that it is very emblematic of what is going on now.
What we see on TV now is the rioters (and they are rioters) burning down businesses. And if your channel feels the need, you will also see the aftermath. A sad man or woman walking through a burned out building and talking about how they will (or will not) rebuild their family business.
But they are capitalists right? They have it coming. Scrape the politics off that argument and you will see small business owners who lost their livelihood. Some of them black (as if that matters).
So what's wrong with a tooth for a tooth? After such an unfortunate event as we witnessed, would it be fair to shoot down white people at random? Or, less extreme, burn their houses? Their cars?
No, its businesses. That store. The car dealership. Burning and looting is reparations. And on and on.
So why then? Because they are easy targets. They are downtown, and nobody (or most people) are going to have sympathy for them, because they are rich capitalists, right?
Actually, you would not be wrong for thinking that in general. Smart business owners have insurance.
But at the end of the day, they are going to move out. Use the insurance to start again -- elsewhere. They have been punished for things they had no part in (unless capitalism itself is the guilty party). So they are going to take it as an act of god and get out of town.
Now ask yourself who is helped here? Do business owners have a responsibility to rebuild and take it? Should they just accept higher insurance premiums? Is a large company, a grocery store, legally obligated to do business in the public interest?
2
-
The first problem with UBI is that the politicians will balk at making it "universal", i.e., why pay it to rich, or well off, or even moderately high income earners. Thus it rapidly devolves into just being another welfare system. The second problem, very much related to the first, is that, again for political reasons, it becomes a "soak the rich" plan, which does not provide enough money to support it, but also begins to erode investment. Since politicians (despite evidence to the contrary) are not stupid, and rich people can afford tax lawyers, it brackets down to the very people who are supposed to get benefits from UBI. And then it becomes a system where the government takes your money, then gives it back to you minus their "cut".
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Small nit: if transistors had never been invented, vacuum tube technologies would have followed the same curve. By the time transistors arrived, tubes were being miniaturized, and we had invented cold emitters, tubes that didn't need a heated filament. In fact, integrated circuits using vacuum technology appeared later. They were proposed as an alternative flat panel display technology.
Extra bonus nit: It was true that early vacuum tube computers tended to operate for short times because a tube would burn out. However, the Univac series of computers, in a company founded by the inventors of the stored program computer, Eckert and Mauchly, fixed the tube lifetime issue by "derating" the vacuum tubes, that is, running much less filament current through them than normal, thus prolonging their lifetime and enabling the machine to solve serious computing problems.
The next most difficult problem in computing, that of creating a large memory store, was solved, ironically enough, by neither vacuum tubes nor transistors, but by little magnetic donuts known as magnetic core memory. Thus computers reached much of their current form before transistors arrived.
Taken together, magnetic memory, magnetic tape and magnetic discs formed a huge part of what made computers successful through the end of the century and beyond. I guess it does not sound as sexy to have a title like "Rust: the most significant substance of the 20th century" :-)
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@AlexanderSylchuk Oh, you are begging for my favorite story. I interviewed with a company that made precision flow valves. These were mechanical nightmares of high precision that accurately measured things like gas flow in chemical processes. This is like half the chemical industry (did you know a lot of chemical processes use natural gas as their feed stock?). Anyways, what has that got to do with this poor programmer? Well, like most industries they were computerizing. They had a new product that used a "bang bang" valve run by a microprocessor. A bang bang valve is a short piston driven by a solenoid that, when not energized, is retracted by a spring and opens an intake port to let a small amount of gas into a chamber. Then the solenoid energizes, pushes the piston up and the gas out another port. Each time the solenoid activates, a small amount of gas is moved along. Hence the "bang bang" part. If you want to find one in your house, look at your refrigerator. It's how the Freon compressor in it works.
Ok, well, that amount of gas is not very accurately measured no matter how carefully you machine the mechanism. But, it turns out to be "self accurate", that is, whatever the amount of gas IS that is moved, it is always the same. The company, which had got quite rich selling their precision valves, figured they could produce a much cheaper unit that used the bang bang valve. So they ginned it up, put a compensation table in it so the microprocessor could convert gas flows to bang bang counts, and voila! Ici le produit! It worked. Time to present it to the CEO! The CEO asks the engineers "just how accurate is it?" Engineer says:
well... actually it is more accurate than our precision valves. And for far cheaper.
The story as told me didn't include just how many drinks the CEO needed that night.
So the CEO, realizing that he had seen the future, immediately set into motion a plan to obsolete their old, expensive units and make the newer, more accurate and cheaper computerized gas flow valves.
Ha ha, just kidding. He told the engineers to program the damn thing to be less accurate so that it wouldn't touch their existing business.
Now they didn't hire me. Actually, long story: they gave me a personality test that started with something like "did you love your mother". I told them exactly where, in what direction, and with how much force they could put their test, and walked out.
I didn't follow up on what happened, mainly because I find gas flow mechanics to be slightly less interesting than processing tax returns. But I think if I went back there, I would have found a smoking hole where the company used to be.
And that is the (very much overly long) answer to your well meaning response.
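For what it's worth, the "compensation table" trick in that story is simple enough to sketch: calibrate how much gas one bang-bang cycle actually moves at a few flow points, then interpolate to turn a requested flow into a pulse rate. A hypothetical Python version, with calibration numbers invented purely for illustration:
```python
# Sketch of a compensation table converting requested flow to bang-bang pulses.
CALIBRATION = [          # (requested flow in sccm, measured standard cc moved per pulse)
    (0.0,   0.010),
    (50.0,  0.011),
    (100.0, 0.012),
    (200.0, 0.013),
]

def pulses_per_second(flow_sccm):
    """Convert a requested flow into a bang-bang pulse rate via the table."""
    lo_pt, hi_pt = CALIBRATION[0], CALIBRATION[-1]
    for a, b in zip(CALIBRATION, CALIBRATION[1:]):
        if a[0] <= flow_sccm <= b[0]:
            lo_pt, hi_pt = a, b
            break
    span = hi_pt[0] - lo_pt[0]
    frac = 0.0 if span == 0 else (flow_sccm - lo_pt[0]) / span
    cc_per_pulse = lo_pt[1] + frac * (hi_pt[1] - lo_pt[1])
    return (flow_sccm / 60.0) / cc_per_pulse   # sccm is per minute; return pulses per second

print(f"{pulses_per_second(75.0):.0f} pulses/second for 75 sccm")
```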
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
The technology is tied to the state of battery technology. The reason that EVs are impractical for aircraft use is the weight of the batteries. Not only do they weigh much more than the fuel they replace, but, unlike liquid fuel, they don't get lighter in flight as the fuel is consumed.
If/when batteries improve significantly, that calculus will change. The other factors are:
1. Noise. Most of the noise in an aircraft is blade noise. This is why (for example) electric yard leaf blowers aren't really much quieter than gas powered ones.
2. Reliability. Sorry, the comments here are just wrong. The reliability of electric motors is far greater than piston engines, and compares more directly to turbine engines.
3. Power. The power output per weight of an electric motor is significantly greater than a piston engine, and exceeds even a turbine engine. This allows an electric aircraft to use multiple engines and thus gain even better reliability.
Again, it mainly depends on battery technology advances. Most of the stock bets on eVTOLs have been about the idea that batteries will significantly improve in a short time. They actually have. They have about doubled in the last two decades, but then the battery chemistry changed radically, from lead-acid to lithium. It is not certain we will see that much improvement in the next two decades. Also, the majority of research being done now is aimed at reducing the cost of batteries, not so much the weight, which is not as important for ground vehicle use.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
The problem in Unix/Linux is that, for a host of reasons, users are encouraged to take root permissions. Sudo is used way too often. This breaks down into two basic issues. First, users are encouraged to install programs with privileged files or files in privileged areas. Second, in order to fix up problems, it is often necessary to modify system privileged files. The first issue is made far worse by effectively giving programs that are installing themselves global privilege to access system files and areas. It's worse because the user often does not even know what areas or files the program is installing itself in.
The first issue is simple to solve: install everything a user installs local to that user, i.e., in their home directory or a branch thereof. The common excuses for not doing this are "it costs money to store that, and users can share" or "all users can use that configuration". First, the vast majority of Unix/Linux installations these days are single user. Second, even a high end 1TB M.2 SSD costs 4 cents per gigabyte, so it's safe to say that most apps won't break the bank. This also goes to design: a file system can easily be designed to detect and keep track of duplicated sectors on storage.
The second issue is solved by making config files or script files that affect users local, or having an option to be local, to that particular user. For example, themes on GTK don't need to be system wide. They can be global to start but overridden locally, etc. A user only views one desktop at a time. The configuration of that desktop does not need to be system wide.
My ultimate idea for this, sorta like containers, is to give each user a "virtual file system", that is, go ahead and give each user a full standard file tree, from root down, for Unix/Linux, BUT MAKE IT A VIRTUAL COPY FOR THAT USER. I.e., let the user scribble on it, delete files, etc., generally modify it, but only their local copy of it. The kernel can keep track of what files are locally modified by that user account, akin to copy-on-write paging. You can even simulate sudo privileging so that the system behaves just like straight Unix/Linux, but only modifies local copies, etc.
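A minimal user-space sketch of that copy-on-write idea, in Python: reads fall through to the shared system tree, writes land in a private per-user overlay. The class and directory names are made up for illustration; a real version would live in the kernel or use something like overlay mounts.
```python
# Sketch: per-user copy-on-write overlay on top of a shared system tree.
import os
import shutil

class UserOverlayFS:
    def __init__(self, system_root, user_root):
        self.system_root = system_root    # shared, effectively read-only tree
        self.user_root = user_root        # this user's private copy-on-write layer
        os.makedirs(user_root, exist_ok=True)

    def _paths(self, relpath):
        return (os.path.join(self.user_root, relpath),
                os.path.join(self.system_root, relpath))

    def open_for_read(self, relpath):
        user_path, system_path = self._paths(relpath)
        # Prefer the user's modified copy if one exists, else the shared file.
        return open(user_path if os.path.exists(user_path) else system_path, "rb")

    def open_for_write(self, relpath):
        user_path, system_path = self._paths(relpath)
        os.makedirs(os.path.dirname(user_path), exist_ok=True)
        # Copy-on-write: seed the private copy from the shared tree once.
        if not os.path.exists(user_path) and os.path.exists(system_path):
            shutil.copy2(system_path, user_path)
        return open(user_path, "r+b" if os.path.exists(user_path) else "wb")
```
The shared tree is never touched; "sudo" style edits only ever land in the user's own layer, which is the point.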
2
-
2
-
2
-
2
-
2
-
Just one small thing: I lived through the implementation of Proposition 13. The issue at that time was that the increasing value of property, and the corresponding increase in property taxes, was throwing retired people out of property they had paid off. This was because they lived on fixed incomes, and the property tax was becoming unpayable, resulting in the sale of the property. Good for new residents buying property, bad for retirees. Because older people vote more than younger ones, this resulted in the "taxpayer revolt" and the passage of Prop 13. Would that pass today? A good bit of the exodus from California is due to retirees moving out of CA. Not from property tax increases, but because all of the costs, including taxes, have risen, and the differential of property values in CA vs. other states makes it attractive to move away. Thus fewer people would vote for a Prop 13 today.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
1
-
1
-
This is the liberal view of history. The "solid south" existed up until the 1960s, and Republicans were active in civil rights legislation throughout that time. The party that changed was the Democrats, who came to a head with George Wallace (yes, a Democrat). LBJ went against much of his own party and joined with the Republicans to pass the Civil Rights Act, and what was effectively an apartheid state in the south collapsed. Democrats did an about face and changed from being the party to suppress blacks to being the party of socialism, with a laundry list of social engineering acts modeled after what FDR attempted to do during the great depression.
This is the important point. The Republicans believed, and believe, in basic freedoms and limited government throughout the time from Lincoln to the present (Lincoln believing that the basic freedoms of man extended to blacks as well). The Democrats believe virtually all problems have a government answer, as expressed through FDR and then the rise of socialism with LBJ. This brought a backlash with Reagan, a huge rollback of the regulatory state, and a huge boost in the economy.
With the war, then Obama and a huge snap back to socialism and a fall in the economy, the forces on the right are once again feeling revolutionary. However, unlike the last time, when there were clear leaders, we have Trump. Trump is a symptom of the deep divisions in this country. The democrats have done the USA a great favor by all but declaring intent to carry the flag of the socialist party (when Obama was first running for office, the head of the French socialist party declared that "the way you run as a socialist in America is by proclaiming you are not a socialist").
This fight, now so clearly defined between socialism and capitalism, big government and limited government, will come to a head, but I suspect not in this election, which is more like Huey Long vs. Teddy Roosevelt than Reagan vs. Carter. Perhaps the next election.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I resonate with your comments about being given low value work. I left a job at Intel where they tried to downgrade my position from developer to tester, and I refused. It was during Covid, and I got two contracts back to back with Apple and Meta. High paying and they seemed good at the time, but in both cases it became obvious after starting the work that they had put me on low level work, debugging and testing, apparently with the idea that someone with what was clearly a high level developer background should be able to handle it easily. This was of course true, and I wasn't in a position to drop the contract(s), even though both companies had clearly misrepresented the position.
Long story short, both contracts ended with the complaint that I wasn't working fast enough for their tastes. More interesting, better, developer level jobs in both companies were interested in interviewing me during my time there, and I suspect that interest rapidly disappeared when they found out I was currently in a below-developer position in the company despite my resume.
After the end of both contracts, I got a job with a startup last year that gave me valuable developer work, albeit with a pay cut. I'm already drawing SSI and have an overpriced Silicon Valley house I can sell to get out of here, so the next move for me is to just quit working and move if the bottom falls out again. You kids have my every sympathy for what is going on now, but I will say that I have been a Silicon Valley engineer for 40+ years, and times when the employers have the upper hand have come and gone before. They will go again, and the companies will realize that they got sold a turkey with the "AI revolution". Just wait it out.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
So 5:00. Take fueling time for gas cars, take recharge time for EVs and extrapolate. This assumes people wait around to charge their cars like they wait around to fill their gas cars. I have an EV, the amount of time I spend charging my car is 0. 0 hours, 0 minutes, 0 seconds, because I have home charging. I come home, I plug in, I go about my life. Do I use on the road chargers? Sure, then I use fast chargers. So the right mix is slow chargers at home, fast chargers for trips. This rant from the video author assumes worst case.
So not everyone has home charging, right? Sure. But there is no particular reason the vast majority of car users cannot have it. In the Canadian north, cars have to plug in to heaters or they die, because it is cold. So you have a stall at your apartment that has a plug. Wiring is not that expensive, and most apartment buildings with covered car areas can be equipped with plugs. Note PLUGS. Giving everyone an internet connected charger controller is unnecessary and expensive.
Should everyone be forced to drive an EV? Of course not. But the issues with EVs are way way (way) overblown. I charge my EV perhaps 2-3 times a week for my commute, and my employer has 6kW plugs. Thus I could charge without having any home charging at all.
We (the EV community) are very familiar with these lists of "why EVs can't work". They all have a common feature: the author does not have an EV and does not know how they work.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Microsoft could save themselves, eliminate a lot of their costs, and emulate the Mac/Apple model in less than a year with one simple step: port their GUI on top of a Linux or BSD kernel. The result would still run Windows programs, look like Windows, and remove a major cost item from Microsoft in one step: maintaining their own kernel and drivers. X Windows, even Wayland, would be considered less high tech than the Windows GUI layer, and Windows would be as portable as Linux/BSD even if they keep the source code proprietary.
Windows does not rely on special features of its NT kernel any more than X does. It pushes files, it runs tasking, it has a driver model. Linux even supports the NT filesystem. With a couple of API layers over Linux, they could park the majority of the Windows GUI code over the kernel. The Windows GUI could take over the video subsystem the same way it does now.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
"rebuilt...twice as fast with fewer people, %33 fewer lines of code, %40 fewer files 2x request/sec" you could say that about any rebuild project. When A project is reprogrammed, the programmers apply the lessons they learned from the first build to the second and come up with better solutions. The last claim is the most suspect. You are comparing a pure interpreted language (JS) vs a compiled language (Java). Whatever resulted in that speed improvement, it didn't come from the language itself.
Sorry, not trying to rain on your parade, but these claims are pure advertising fluff.
Second, sorry, your description of threads is just plain wrong. Threads are not expensive, and you don't "run out of threads". Most operating systems have no limit on the number of threads beyond the memory required. Furthermore, threads don't "require new hardware". You can have as many threads as you like running on the same CPU. I suspect you are (inaccurately) describing the assignment of threads to different cores. That only gives better performance if the threads are truly running in parallel and not awaiting I/O, which is what they are doing the vast majority of the time.
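A quick Python sketch of the thread point: you can start far more threads than you have cores, and as long as they are waiting on I/O they all make progress on the same CPU. The numbers here are arbitrary; memory, not core count, is the practical limit.
```python
# Sketch: many more threads than cores, all waiting on (simulated) I/O.
import threading
import time

def worker(i, results):
    time.sleep(0.1)          # stands in for waiting on I/O
    results[i] = i * i

def main():
    n = 1000                 # far more threads than any machine has cores
    results = [0] * n
    threads = [threading.Thread(target=worker, args=(i, results)) for i in range(n)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"{n} threads finished in {time.time() - start:.2f}s")

if __name__ == "__main__":
    main()
```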
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
So Ford loses money on each car. And Tesla should as well, right? But Elon Musk has made billions from Tesla. It's almost like Tesla knows what they are doing and Ford does not, right?
As an EV owner since 2013, I don't want the government to subsidize EVs, but then I never did. But then it also makes sense for the government to STOP keeping BYD OUT of the American market. The Ford and GM subsidies are PROTECTIONISM, and they are what is allowing Ford and GM to SUCK at making EVs while still staying in business. BYD is making cheap EVs for the world, the world outside of the USA. And keeping BYD out is part of the "we are stopping the bad Chinese" meme Biden (and before him Trump) are selling you. That bad Chinese company is selling you cheap cars simply because they are evil. Right.
Let the f***king market work and things will get better. Ford and GM may not survive, but they should be allowed to stand or fall of their own accord. Your government has shoved a truly impressive amount of money into Ford and GM to "keep American jobs", which is people making like $50 an hour to put a screw in a car. THAT's why Ford is losing money.
Tesla is standing on their own and competing head to head with BYD in their own market. And they could do it without subsidies. They do it by offering a better product. What a concept. I have had both a GM Bolt and a Tesla. Bolt is a nice car, but falls far short of a Tesla for the equivalent price. GM sucks at software, and I know why personally, I have seen their operations.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
The 8086 series was a loser from the get-go. Software professionals preferred the cleaner architecture of the 68000 series, which did away with segmentation (a disease more than a technology). Indeed, the 8086 was in fact failing in the marketplace until IBM "rescued" it by using it in the IBM PC. That was the first crisis for the x86 family. The thing about segmentation is that it is impossible to hide from higher level software. Even the language has to adapt to that hack, and indeed, C of the day had special features just to adapt to it. The result was that software was divided into x86 specific software and non-x86 software. Intel was happy about this because their users were non-portable. They doubled down on segmentation long after the size of chips enabled better architectures with the 80286, which could be explained as "you will like eating sh*t if we make it a standard". IBM again propped up the x86 family by releasing the IBM-PC/AT, which, even though it used the 80286, never saw wide use of an 80286 enabled operating system (cough OS/2).
This carried x86 into the age of RISC. The x86 family entered its second crisis as improved 68000 processors, the SPARC, and other RISC processors nipped at its heels. The introduction of the 80386 saved Intel, got rid of segmented mode, and allowed a Unix implementation on x86 for the first time. The next crisis for the x86 was when the series fell behind RISC processors in performance. Intel pulled off a genuine coup by rearchitecting the x86 as an internal RISC that translated the crappy x86 instruction set to "ROPs", or internal RISC operations, and made the CPU superscalar with the Pentium. The final crisis for the x86 was when Intel tried to dump the dog of a CPU with Itanium, only to 180 again and join AMD with the "hammer" AMD64 bit architecture.
Now we are in the age of RISC-V. The x86 has become like a case of herpes: bad, but livable, and seemingly never to go away. We'll see.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I believe this tech will work, but only if the fuel is sealed into the SMR at the factory, it is returned to the factory at end of life, and the onsite operators have strict instructions not to mess with it. This removes the major cause of errors, namely the human one.
If this seems extreme, note that it is what we do in other industries. Modern jet aircraft are very complex, but nobody services the most complex element, the jet engine, on site. Engines are made easy to swap out and transport, and if anything even looks wrong with it, they swap the engine out for another and send the engine back to the manufacturer.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Redistribution, a nice word for taking people's money and passing it out to others, can be called "socialism" if done by the government. Communism is socialism achieved by force. Since, the last time I checked, nobody works hard to get income and then voluntarily turns it over to the government, socialism is a fantasy and communism is the actual system being advocated.
Are we "advancing past the need for economic systems of the past", as in, of course, capitalism? Hmm. Socialism/communism is not a new system, has killed off massive numbers of people, and has been "reintroduced" several times with a new coat of paint, and it ends the same: a little of it causes problems, a lot of it kills people. This is hardly an advance. It is hardly new.
I'm sorry you don't feel like working, but that's not my problem, I work for a living. Go try it and quit bitching.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
This is a fairly vacuous comparison. None of the BRICS nations are going to unite to form a force, and they are in fact more likely to start wars with each other. Further, the real comparison, GDP per capita, is unmentioned here for the obvious reason: it is stacked in favor of the USA, which is $57k, vs. Brazil $15k, Russia $26k, China $15k and South Africa $13k.
The USA spends more money than other countries on defense because we have better technology than other forces and are more effective than others, again per capita, and that relationship to per capita GDP is not an accident.
Finally, you have to consider the force equations of each of the respective countries. Russia and India devote a lot of their forces to counter China, but not each other, since they don't share a land border. China counters the USA in the Pacific, but probably still reserves the majority of their forces for Russia and India, their bordering states. Brazil could effectively do without any military, since the only credible threat is Argentina, and nobody would take Brazil's territory even if it were offered free. South Africa has a large military but no credible enemies, which leads you to understand the real reason their large military exists: to control their own people.
India does not consider the USA as a threat, quite the contrary they consider the USA an ally. Thus the net force facing the USA is China and Russia, and neither state seriously considers the USA to be an invasion threat.
Thus I sleep well at night.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
The problem with Italy and Greece and perhaps Spain was that they saw integration with the EU as free money, and used it to shore up finances that were overburdened with too large a government sector. You can hardly blame them for this. This was, in fact, the candy dangled by the EU to get them to join. Then the financial crisis came, and to the PIGS' shock, they found out that they would actually have to pay back those loans. The fact is that if the PIGS were not in the EU, the EU would be doing much better now. I am not saying that should happen. Germany, of all the EU states, should understand well what is going on. Just as Germany absorbed its eastern side, the EU formed itself by passing out money, in the belief it would pay off in the long run. I still believe it will, but then I am not paying for it.
1
-
1
-
"A/C systems are more economical than D/C".... welllll in old tech that was true. Transformers were easy to make, power came out of the generator in A/C, and the most common use of the power in the early 1900s was motors that ran on A/C.
Now fast forward to today. Your power plug shrank. What happened? Well, copper is expensive, so anything that shrinks that is cool. Plus, power is lost in the transformer, which was what was in that heavy brick that used to power your laptop. In addition, generally you use D/C power in most of your house now. All the electronics, those LED lightbulbs, etc. Yep, all D/C.
Electronics came to the rescue. Turns out if you use high power, high (er) frequency (than A/C at 60/50 hz) you can do the same power conversion with way way (way) less copper or even no copper at all. Plus, it is way easier to perform high power conversion now, even at high voltages.
Thus things are changing, rapidly. There is a good chance that many or even most lighting systems will go DC as distributed on DC feeder lines. This is already true in some large offices and industrial concerns. This is because a lot of the power used in LED lighting is used in the conversion from A/C to D/C. Want to prove this to yourself? Go find a screw in LED lamp in your house. Feel the glass where the light comes out. Now feel the base (don't touch the metal). The base is hotter isn't it? That is where the A/C to D/C converter is.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Exabyte was founded to record digital data on videotape for backup in 1985. They went fairly rapidly to 8mm video cartridges, as opposed to full size VHS, in 1987 (dates according to Wikipedia). The 8mm format had been introduced in 1984. DAT tapes were a similar if slightly smaller format, and indeed Exabyte bought my company, R-Byte, which made a DAT tape drive, in 1992.
None of that occurred in a vacuum. QIC tape drives were introduced by 3M in 1972 (again Wikipedia), and although they didn't store Exabyte level densities, they ranged from 20MB to 400MB in later versions. So video tape as data storage had a competitor.
Tape storage, going back to the open reel computer days, was generally used as backup on bigger machines and didn't come to PCs until the QIC days. Writable CDs were supposed to take over as a backup medium, but what actually happened is that disc drives became so cheap per unit of storage that it made more sense to just purchase more drives and use them as backup, hence NAS (network attached storage) and data vaults. Now, companies are just as likely to ship backup data to cloud companies.
1
-
1
-
Hydrogen, hydrogen. First of all, hydrogen is not a fuel. It is simply a way to transfer energy. Second, by the time you solve all the issues with practical use of it in aircraft, pressurized vessels, the resulting weight of such vessels, insulating them, not being able to put them in wings, etc., etc., it does not look so attractive anymore. By the time the technology advances, electric batteries for aircraft use are going to be just as far along, if not farther. I suspect decarbonization of aircraft is going to proceed by dividing short haul from long haul aircraft, electrifying the former, then using a solution like plant derived fuel for the latter.
The airline industry has been trained to think only in terms of long haul. If I want to go from my house in San Jose, CA, to my kid's house in Eugene, Oregon, a search on the airline sites gives a path through Denver. So I would need to travel half the length of the USA, then back again, just to reach another destination on the same coast. If you take out the hub and spoke model and stop treating people like cattle being shipped to market, and get people from their ACTUAL hometowns to their ACTUAL destinations (not to and from a HUB AIRPORT), you can do it with less travel time, less energy, slower aircraft, and make customers happy. There WAS an airline providing service from here to Eugene. Directly. From Alaska Airlines. It used turboprop aircraft, and perhaps took twice as long as a jet would, but certainly less time than a tour of Denver (and waiting for a connecting flight there). And that turboprop aircraft is possible to replace with a pure electric aircraft, including batteries in the wings.
The airline industry is like a shoe company that only sells size 6 shoes and works by hammering the shoes onto your feet until they fit (and then you limp out of there).
1
-
1
-
1
-
1
-
1
-
1
-
PSH is very scalable. The greens are against it. Same problem as always: more reservoirs, more opposition. Near me is the San Luis reservoir in California. It has no real natural inflows; it is fed by pumping water up into it from the aqueduct, and then letting it flow back downhill to generate power. It's a big battery, and it is also useful to store water. California desperately needs both facilities. We generate a lot of excess power from wind that is wasted, and much of the water that flows from the mountains here is simply dumped into the ocean. So why don't they build more San Luis reservoirs? Because in California, there is a large group of "green" lobbyists who are against it. A simple measure that would allow the biggest reservoir in the state, Shasta dam, to increase its height and thus its capacity (an existing reservoir!) ran into critical opposition. What do the greens want? They want Shasta gone, and all the rest of the reservoirs gone as well, i.e., restore the land to a natural state. Not enough water for people? Stop development so people move away. Etc.
The greens SAY they are for carbon free power, but then do everything they can to prevent it from practical implementation.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
California is an island, geographically and politically. I'm currently sitting in Kauai, which does not appear to care about waste of either water or power. The water surplus is obvious on Kauai if you are trying to stay dry here. The power surplus is largely due to solar power, which the power company made obvious by rejecting new requests from homeowners to connect their solar panels to the power grid due to overloading from the existing power surplus.
So why isn't California the beneficiary of the same principles? For water, they have turned down all efforts to modernize the water system and collect the water falling on the state, even though it is far in excess of the needs of California, even with the incredible waste of it by well connected agricultural interests. For power, California leads the nation in both power company solar sites and individual houses with solar, but manages to be on an upward spiral of prices for it even though solar is now cheaper than most power sources.
The issue is our socialist government, which generated a huge surplus under Governor Brown by dramatically increasing taxes and fees for everything under the sun. You pay fees here for sitting down at a restaurant vs. take out, and you pay fees to buy computer monitors, a fee whose purpose was to dispose of the lead in computer monitors. What lead, you say? Yes, Virginia, computer monitors used to have lead in them. The commuter lanes put in to ease traffic are now being sold to the highest bidder. The list goes on and on. So California had a surplus from all of those fees and taxes. Where is it? It's in bureaucratic waste, government workers' retirement and salaries, etc. It doesn't matter how much money the government takes in, it will spend that and more. CA has already instituted universal health care via "Covered California" and will institute universal basic income when it thinks it can get away with it, despite the fact that any reasonable payments of UBI will still leave people homeless.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
There is no conflict in going back in time. Do a simple thought exercise. Travel to the future. You are doing that right now. Pick a target, say, an hour from now. You can choose several different futures. You could get in your car, go somewhere, rob a bank, shoot yourself, etc. Any number of different futures. Since there are N different futures you can pick from, it stands to reason there are N paths to get from any past to your present NOW. If you travel to the past, manage to find that past (out of all the possible pasts) that has a Hitler ready to take on the world, and kill him, then you would not be coming back to this future again but to a different one.
The difference is if you believe there is ONLY one past and ONE future. Since you already believe that there are N different futures, your belief that there is only ONE possible past is not a reasonable belief.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
The plants on earth are capable of pulling carbon out of the atmosphere and restoring balance IF we stop dumping it back in. The plants were the ones that created the atmosphere in the first place, and deposited the carbon removed from the atmosphere below ground, storing both carbon and power. All of that originally came from the sun, and we can get it from the sun as well, through wind and direct sunlight. The only other power source on earth is nuclear, which we can extract from geothermal sources (yes, Francine, that power comes from nuclear sources inside the earth) or directly from fission or fusion.
In my opinion the most cost effective path, in the long run, is extraction via direct solar, indirect via offshore wind, and indirect via geothermal. Any use of carbon based energy from the ground is not renewable, and will eventually kill us. We CAN form a balanced and sustainable energy economy.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I call what is discussed here "traverse refactoring", that is, rearranging the code or adding routines to the code to support an upcoming feature, but that does not completely implement it. I break this down into two types:
1. Refactoring the code to make it easier to support an upcoming feature.
2. Adding routines/classes needed by the new feature.
Both of these don't break the code, and thus can be committed to mainline without affecting it. And perhaps equally important, these improvements can be removed if they don't work out.
The reason I call this traverse refactoring is from mountain climbing. If you are climbing up the face of a rock and realize it is going to be too difficult, you move sideways or "traverse" the rock face to find a position where continuing upwards is easier.
A few comments on other items described in the video:
Feature branching is an invitation to rebase hell. I have never been on a project with a significant number of people (>3) where the feature branches were not falling behind mainline rapidly. This means rebasing, either frequently or all bunched up just before the merge.
Having "flags in the code"... this makes my head hurt. Two flags means 4 combinations. Three means 8, 4 means 16 combinations, etc. I.e., you rapidly lose control of the codebase. Further, most of the code in a feature does not affect other code, meaning that you are only including it in tests (and in compiles, if you are #ifdefing!) if the flag is on. Yes, this method is common. No, I am not a fan.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Let's see:
1. I don't like auto formatters. I have actually pulled down the source for GNU indent and modified it for site requirements, and then automated it for commits. However that was a company that had lost control of their source, with many programmers with odd styles who had left the company and left the source in very bad state. I personally am very picky about my code formatting, and the last thing I want to see is an auto formatter redo everything.
2. I love your point about finishing work. I should add that most of the time this is not a programmer issue, but a management issue. You want to get something working to show management/clients, and despite many warnings about the code needing improvement, management wants you to move on to the next project.
3. Building time into a project for documentation, debug and test: I think that needs to be accounted for. Nobody seems to build time in for debugging, even though that is 50% or more of the work. Scheduling for management with hand waving about needing extra time is a recipe for managers cutting down your schedule.
One story that touches on what you said: I had a fairly intensive parser to process design files. I have done compilers before, and have a very well defined set of procedures I use to lex the file before parsing it. I handed it off to another programmer to add a feature, and he was taking a serious amount of time to do it. I didn't want to micromanage, and he was not my employee in any case, so I just made the standard inquiries about how much time it was taking.
When he was done and handed over the work, he had rewritten the entire parsing front end using scanf() statements, an effort far in excess of the actual feature work required. Asked why, he said "I didn't understand what you did".
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
This is classic compressive spalling. Under such a compressive shock, a glass ball can be very hard. Recall that glass is actually a liquid, albeit a very slowly moving one (there are examples of Roman glass vases that have slowly collapsed under a load). Thus for microseconds, the ball is, essentially, bulletproof. However, the impact sets up powerful shockwaves in the glass. It compresses, and then rebounds. That rebound contains enough force to break the glass off the surface in sheets (spalling), because the glass is more able to resist the force compressing it than the counterwave expanding outwards. The waves can go anywhere in the glass, including to the opposite side. The shape of the glass probably focused the waves to the opposite side.
I have observed a similar effect when hitting bottles with a lead pellet gun. The pellet is deformed in the shape of the bottle, and this matches what I saw, albeit in very rapid succession. The lead pellet hits the bottle, is completely stopped, and drops off the bottle. However, the shock waves crisscross the bottle, and it shatters from the force of the shockwaves. I observed that because I saw a short period of time between the time the pellet hit the bottle and the time the bottle shattered.
1
-
1
-
1
-
1
-
1
-
1
-
I agree with most of what you said, but in our area, and probably most areas of the USA, the temperature, even on 90 degree days, drops into the 60s at night. It makes no sense to run air conditioning when the outside is that cool. You get the same effect, including storing cold air in the house, simply by drawing the cold outside air into the house. For that I have a "whole house fan" that actually goes one better. It both draws cold air into the house at night and expels hot air from the attic at the same time. The result is that the house is considerably cooler by morning, and then you shut the windows to keep it that way. In this way, I usually manage to keep the house in the 70s until about 3pm even on 90 degree days. But then I turn the A/C on.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
The "apartment dwellers cannot charge" thing is a canard. Most apartments have assigned spots, and I have never lived in an apartment building that didn't have covered parking and a light above it, meaning that it has electrical runs to the space. In Canada and other cold weather places, you have to have a plug at each space for the simple reason that if you don't plug you car's engine heater in at night, your car will be dead in the morning.
Cars can be charged from 110v, but 220v is obviously better. Although apartment owners will moan about the costs, running 220v drops to each space isn't going to break them. I'm amazed sometimes at how useful a 220v/30 amp L2 charger really is. I have two long range cars, a Bolt at 238 miles and a Tesla M3 at 320 miles, and typically they charge up in 4 hours on a 6.6kW L2 charger, because I don't run them all the way to zero, nor is that a good idea. Both of my cars need charging perhaps once or twice a week even with my 40 mile round trip commute. It's not even necessary to purchase a $400 charger. My Tesla came with a 220v charger free with the car, and it's a reasonable cost with the Bolt. With that and a 220v outlet, you are there for at least a 3.3kW charge.
I will say that 130kW supercharging is amazing on the road (Tesla). I typically think about charging when the miles left drops to 2 digits (<100), and I see the 100kW+ charge rate for only about 20 minutes. But that charger takes the Tesla from less than 100 to over 200 miles in those 20 minutes, which is rocket fast compared to other cars, and makes highway travel amazing.
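To put a rough number behind those L2 charge times, here is a back-of-the-envelope calculation. The miles-per-kWh and charging efficiency figures are assumptions for illustration, not measurements from my cars.

/* Back-of-the-envelope L2 charge time. Efficiency and miles/kWh are assumed values. */
#include <stdio.h>

int main(void)
{
    double charger_kw    = 6.6;   /* 220v/30A L2 charger */
    double miles_to_add  = 100.0; /* typical top-up, not a zero-to-full charge */
    double miles_per_kwh = 4.0;   /* assumed EV efficiency */
    double efficiency    = 0.90;  /* assumed charging losses */

    double kwh_needed = miles_to_add / miles_per_kwh;
    double hours      = kwh_needed / (charger_kw * efficiency);

    printf("Adding %.0f miles takes about %.1f kWh, or %.1f hours on L2\n",
           miles_to_add, kwh_needed, hours);
    return 0;
}

With those assumed numbers a 100 mile top-up is about 25 kWh, a little over 4 hours, which matches what I see in practice.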
1
-
1
-
1
-
1
-
1
-
1
-
Yea, if we recognize 1000 year old claims to land, then everyone in the world will have to move. This is not the strongest Israeli argument, and it is the worst excuse for the settlements in the West Bank/Judea. The area was a territory of the Ottomans, originally taken by force, and it included modern day Jordan. The correct answer is that a lot of territory got redistributed in the Middle East, that there was never a state of Palestine, that the Palestinians aren't demanding Jordan "back", and that more than half of the Israelis were themselves evicted by force from the Middle East. The traditional answer in the past, and the one carried out in Jordan, was mass evictions and executions of the native peoples. Obviously the better solution is to come to an agreement.
I believe this will happen when the Palestinians lose all other supporting countries that encourage violence over coming to an arrangement. I think Hamas also believes this, which is why they launched an attack on the eve of an accord with Saudi Arabia. That leaves Iran, which has many enemies, but none more powerful than its own people, who will prevail in the end.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I think you need to be careful here. I worked for a charger company. There are two main elements of the Tesla plug that are desirable. The first is that it is a better engineering design. The second, and perhaps more important element, is that Tesla is a plug and go standard. I.e., you roll up, plug in, and get charged to charge (pun intended) automatically. No screen, no card swipe, etc.
The first part is not going to change with other makers adopting the plug. However, that second part could have happened at ANY TIME for the J1772 plug. It's true the J1772 plug was and is brain-damaged in that it didn't have a digital communications port built into the connector, and thus was inherently incapable of "plug to charge" automatic debit payment. However, that got fixed with PLC, or "power line communication," a means of high bandwidth communication with the car.
The problem is that Tesla does and will own that automatic payment network. I expect the other car makers to sign on to that network *for a while*. However, for the NACS standard to go forward, eventually other networks besides Tesla will have to start up. This happens naturally as, say, ChargePoint adopts the NACS standard.
The point is, the networks f**ked up the current payment system, even though J1772 is well capable of an automatic payment system, and they *will* f**k up the NACS system as well if they are not held accountable.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I put my open-source contributions at the top of my resume. Nothing. I questioned employers about it, including the ones who hired me. They said they never looked at it. Complete waste of time.
I had one employer ask about my contributions to the Linux kernel, and when I said I had some, he wanted me to show upstream pushes with my name on them. I told him that those contributions bore the name of the company I worked for, not MINE. Again, complete waste of time. So I am supposed to get the people I write code for to agree to put my name on everything. Ok. Still thinking about that; I don't work on kernel drivers at the moment, so it's a moot point.
I guess the point here is if you are that much of a social climber that you run about trying to get credit for everything, consider a career in management, not programming.
Another issue is that I program for FUN on my own time, on projects that are valuable to ME. Even if employers look at my code, they are not going to see an open source "I did an embedded program for an ARM Bluetooth chip", i.e., I don't have work examples like that online, and it would bore me to make one. Even if I did, it would be an artificial example that didn't really get implemented anywhere.
I had exactly ONE example program like that, a disk drive diagnostic, an extensive one. I have programmed one of these several times, and I figured that if I did it on my own time, I could carry it from job to job instead of rewriting it every time. I put that up as a work example. I highly suspect that even if employers look at it, they go "ewwww, disk drives", and don't bother to look at it.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I like your videos, but this shows the basic fallacy of the left's obsession with climate change. I see plenty of people yelling about it, but then if you ask, you find they have done nothing personally about it. They don't own electric cars, they don't have solar (despite having a house). It's just way easier to blame corporations, isn't it?
Getting an electric car is a major step to eliminating the direct burning of the dirtiest fossil fuels, namely gas and diesel. I have had electric cars (2) since 2013, and all of the excuses I hear about why people don't want one are basically nonsense repeated from a biased press. No, you don't have to spend hours waiting at a charger; yes, you can use them for long distance travel, etc. Even if you live in one of the very few states that still rely on coal power generation, an EV is more efficient and has lower emissions than a gasoline car, and coal power is being phased out across the USA anyway.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Couple of comments:
1. The West passed China technologically, but a lot of the basic technology came from China originally. Gunpowder, rockets, mining, quite a few things.
2. "stakeholder economics". Anyone can become a "stakeholder" in companies by purchasing their stock. And in fact, for tech companies especially, companies encourage employees to purchase stock and give stock awards. Beyond that it is up to you. You can purchase Chevron stock, and you can even (with only one share!) come to stockholder meetings and advocate for what you want. They literally have to listen, and you can (and many have) get the other "stakeholders", ie., stockholders to rally behind you and do things like vote the management out. Beyond that you are talking about giving control to people who haven't done a thing to earn it. Are you a stakeholder because you buy Chevron gasoline? The idea of "stakeholder" economics as it is proposed really comes down to more government regulation of companies, that is, government flexing their power to force companies to give "stakes" to people who had nothing to do with the success of the company.
3. The USA has been declining in emissions while China's are growing, and yet we want to put regulations on the USA but let China go its own way?
4. Companies in the USA are responsive to both their shareholders and to the public. They respond to pressure. That does not happen in China, where companies are only beholden to the government.
1
-
1
-
1
-
1
-
"The best thing about a Mac is it is Unix based"", and they have managed to drain pretty much all the advantages out of being Unix based. It uses different software from the GUI up, its a struggle to get Unix compatible software loaded on it, the list goes on and on. My rule is if you like Windows, use Windows, if you like Mac, use mac, and if you like Unix/Linux, use Linux or a BSD. Trying to emulate another OS on your device is a losing proposition.
I use Linux on my main machine. I use Windows on my laptop because Linux sucks that battery down like a tick on steroids, and my wife uses Linux on her main computer and a Mac laptop. Never heard her complain, mainly because she only cares about the web. If I need to develop for, say, an Android device with an IDE, I use Linux. I see plenty of developers trying to force a Mac to do that; it's a complete waste of energy. If I have an IDE that works on Windows, I'll try it on Linux, but if it doesn't work, I give it up. That's why all my machines at least dual boot.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@CrasusC You can always tell when internet posters don't have solid arguments, because they resort to petty insults.
Those are both examples of defenders, the UK defending its homeland, and the Germans defending theirs. The point, which you ignored, is that if China is actively bombing Taiwan, then targeting Beijing or Shanghai with conventional missiles, even if it has no material effect on the war, would involve the Chinese population more intimately in the war and bring it home. I don't believe the general population of China is in fact for a war with Taiwan. They regard the Taiwanese as fellow Chinese, and believe that Taiwan should give up and submit to China peacefully. I don't think reciprocal bombings of China would change that. More likely they would break the idea that the "war is over there", free of costs.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I agree with everything except the claim that EVs don't have tire wear. My Tesla M3 eats tires because of the weight of the vehicle. I also think that we are finally seeing the lie of the S curve. It is not happening. It's going to be a long slog to full EV adoption.
As far as charging, my experience the other day was illustrative. I forgot to charge my TM3, and was getting low charge alerts as a result. Here in Silicon Valley there is no lack of charging spots, and I got off the freeway and turned left into a shopping center with about 20 Tesla chargers. The left turn was at an intersection with 4 gas stations, one on each corner. Each one of those could have 20 charging spots if they used all the land, and at 20 minutes for a quick charge that would about cover the need, considering that a normal gas station has about 16 possible refueling locations (4 islands, two cars per side). The gas stations are always saying they make most of their money as convenience stores in any case, and you would think having people stay longer would be an added perk of serving EVs. Most gas stations could add chargers just by converting their 5 to 10 parking spaces, which are usually unoccupied because only the gas station employees are there all day.
As it is, EVs are passing gas stations, going to the back of dark shopping center parking lots to charge and trekking to CVS pharmacies. It's like the gas stations are sleeping through their own funeral.
1
-
1
-
1
-
@boembab9056 If you scale the screen, you are letting the OS/presentation system draw things bigger for you. If you do it in the application, the application is doing the "scaling", QED. So let's dive into that. In an ideal world, the presentation system is taking all of your calls, lines, drawings, pictures, and scaling them intelligently. In that same world, the Easter bunny is flying out of your butt. All the system can really do is interpolate pixels.
Let's take a hypothetical for all of you hypothetical people. They come out with a 4m display, meaning 4 megapixels, not 4 kilopixels. You scale 100%, meaning "no scaling". All of the stupid apps look like little dots on the screen because they are compressed to shit. Now we scale some 1000 times to get it all back. If the scaler does not consider what was drawn, but just pixels, it's going to look terrible as scaled, just as if you blow up a photo on screen far in excess of its resolution. Now the apps that are NOT stupid, but actually drew themselves correctly, are going to look fine, perhaps just that much smoother because they took advantage of the extra resolution.
Now let's go one more. I know this is boring, drink coffee, pay attention. Drawing characters at small point sizes is a problem, right? People worked out all kinds of systems like "hints" to try and make fonts look good at small point sizes like 5-8 points. But you bought that 4k monitor and that 4k card, and THEN you bought a fast CPU to push all of that data around. Guess what? That 5 point problem you had is gone. Just gone. There is sufficient resolution to display fonts on screen down to the point where you can barely see them.
Now ask yourself. How does a scaling algorithm do that unless it DRAWS the characters at that resolution? Keep in mind that programmers spent decades on true type formats and computed character drawing to match mathematical curves to pixels. Is an interpolated scaler going to do that? No, no it is not.
Peace out.
1
-
@boembab9056 Look, I know you are a smart guy, but think about what you are saying. If the application knows how to take care of its own scaling, the OS does not need to do anything, no scaling at all. The typical flow is:
1. If the application has never come up before (default), it takes the measure of the screen, then presents itself according to a rule of thumb, say 1/4 the size of the screen.
2. Size the fonts according to the onscreen DPI. I.e., if you have 12 point type, then choose an onscreen font accordingly. Points are 1/72 of an inch, so 12 point type is about 0.17 of an inch in height ON SCREEN (the conversion is sketched below).
3. Set other dimensions accordingly. I personally use the point size to dimension everything else on screen, and I have found that works well.
4. If the application has executed previously, then just use the last window size. That is a reasonable expectation for the user.
Do that, and no scaling is required. The app knows what to do. If you think about it, what scaling REALLY does is accommodate stupid applications that don't understand how to scale themselves properly.
I follow all of the rules above in my applications. I'll readily admit that I had to do some work to get to 4k displays. Mostly it was because I used standard (and, it turns out, arbitrary) measures to size items in the app's display. Also, when moving to 4k, I implemented a standard pair of keys to let the user adjust the size of the app's display (Ctrl-+ and Ctrl--, same as Chrome and most other apps).
This is the correct solution. Rescaling all applications because SOME programmers don't know what they are doing is not the right solution, and, indeed, it actually punishes the applications that did the right thing by messing with their scaling instead of letting them do it themselves.
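Here is a minimal sketch of the point-to-pixel conversion from step 2 above. The DPI values are just example figures for common monitor sizes, not numbers queried from any particular OS.

/* Point-to-pixel conversion: points are 1/72 of an inch, so
   pixels = points / 72 * dots-per-inch. DPI values below are examples only. */
#include <stdio.h>

static int points_to_pixels(double points, double dpi)
{
    return (int)(points / 72.0 * dpi + 0.5);   /* round to nearest pixel */
}

int main(void)
{
    double point_size = 12.0;
    double dpi_hd = 96.0;    /* roughly a 24" 1920x1080 monitor */
    double dpi_4k = 163.0;   /* roughly a 27" 3840x2160 monitor */

    /* Same physical size on screen, different pixel counts. */
    printf("12pt at  96 DPI: %d px tall\n", points_to_pixels(point_size, dpi_hd));
    printf("12pt at 163 DPI: %d px tall\n", points_to_pixels(point_size, dpi_4k));
    return 0;
}

Size everything from the real DPI like this and the app comes up at the same physical size on any monitor, with no OS scaling needed.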
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@heyyou274 1. Because it conflates all program data into one place. This makes it more difficult to find and separate out. And there is little corresponding benefit to it.
2. Because if the registry gets destroyed, the system is dead. And Windows used to destroy its own registry a lot. Admittedly it has gotten better at this.
3. Removing an application got way more difficult with the registry. Many or even most applications don't clean their registry entries when removed, and in fact some call this a "feature", because it supposedly maintains settings. This can cause interesting things like applications that have persistent problems because the registry entries don't go away, and have to be manually removed.
As a developer, I am a big advocate of the "one tree, one application" methodology. That is, an application is contained in one tree under one directory entry, without change on installation. This means it is installed by simply copying the tree to disk, and can be removed by simply removing that tree. It does not use a complex installer process (such as Flatpak or other). It does not spread itself all over the operating system. And all of its configuration files are contained in that tree. Eclipse is one such application, as are all of my applications.
Do I use the registry? In cases where I am forced to. For example, the path is in the registry, and I have to alter it. There are other examples. But I prefer to keep as much data outside the registry as possible.
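As an illustration of what "one tree, one application" looks like for settings, here is a minimal sketch that reads its configuration from a file sitting next to the executable instead of from the registry. The file name "settings.ini" and its contents are hypothetical.

/* Sketch: keep configuration in the application's own tree, not the registry.
   The file name "settings.ini" and its keys are made up for illustration. */
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    char path[4096];
    (void)argc;

    /* Derive the application directory from argv[0]; good enough for a sketch,
       a real program would resolve the executable path more carefully. */
    strncpy(path, argv[0], sizeof path - 1);
    path[sizeof path - 1] = '\0';
    char *slash = strrchr(path, '/');
    if (slash) slash[1] = '\0'; else path[0] = '\0';
    strncat(path, "settings.ini", sizeof path - strlen(path) - 1);

    FILE *fp = fopen(path, "r");
    if (!fp) { printf("no settings file, using defaults\n"); return 0; }

    /* Simple key=value lines; deleting the application tree deletes these too. */
    char line[256];
    while (fgets(line, sizeof line, fp))
        printf("setting: %s", line);
    fclose(fp);
    return 0;
}

Installation is copying the tree, removal is deleting it, and the settings go with it.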
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Thanks for finally explaining why the Webb telescope needs a sunshade even though it is in the Lagrange shadow.
ISS was a good deal in its day. I agree we are better off building a moon base now, which is permanent. To be really cost effective, I believe we should start missions on the moon for the two most important resources there: water and underground tubes/caverns. Water will be a key resource for oxygen, fuel, and of course water for habitation. The underground tubes, if found, will change everything, since we would be able to block them off and flood them with atmosphere, dramatically decreasing the cost of habitation while dramatically increasing the volume available. They would also be proof against solar radiation events, something nearly impossible to achieve with above ground structures.
A moon base could change everything. Low cost of materials to orbit, the potential of finding materials on the moon, or even towing asteroids there and dropping them on the moon to mine iron and other materials, the ability to set up observatories on the moon, the list goes on and on.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@Weaseldog2001 Well, yes, and no. Now we are into test theory. If a test problem needs to be fixed by better cleanup at the end of the test, does that not imply that the initialization of the next test is a problem? It is clearly not able to bring the system to a stable state before the test.
We had a test unit "farm" at Arista; the idea was that there was a large pool of hardware test units, and the software could take a test run request, grab an available unit, run the tests, and release it again. The biggest issue with it was that machines regularly went "offline", meaning they were stuck in an indeterminate state and could no longer be used. This was even after cycling power for the unit and rebooting. The problem was solved by taking some pretty heroic steps to restart the machine, as I recall even rewriting the firmware for the machine.
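To make the initialization point concrete, here is a minimal sketch of a test harness where setup owns the job of reaching a known state, rather than trusting the previous test's cleanup. The reset_unit() and unit_is_sane() functions are placeholders, not anyone's actual tooling.

/* Sketch: each test's setup forces the unit into a known state instead of
   trusting the previous test's cleanup. The unit functions are placeholders. */
#include <stdio.h>

static void reset_unit(void)   { /* power cycle, reload firmware, etc. */ }
static int  unit_is_sane(void) { return 1; }   /* placeholder health check */

static int setup(void)
{
    reset_unit();                    /* always start from a known state */
    if (!unit_is_sane()) {
        printf("unit unusable, taking it out of the pool\n");
        return 0;
    }
    return 1;
}

static int run_one_test(void)
{
    if (!setup()) return -1;         /* never blame the last test's cleanup */
    /* ... test body ... */
    return 0;
}

int main(void)
{
    return run_one_test() == 0 ? 0 : 1;
}

If setup cannot reach a stable state on its own, that is the bug to fix, not the previous test's teardown.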
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I disagree that chargers are an issue. The press has made this the bogeyman, but it's made up. Most charging is done at home. I do know people who don't have home chargers, and they are able to get charging via work chargers or just stopping at a fast charger. The press keeps quoting the times to charge from zero, which is nonsense. Most charging does not start from a dead or nearly dead battery. With a 250kW fast charger, 15 to 20 minutes gets you several days of average driving, about the same as a stop in a gas car. Also, quoting "350 miles of range" for a gas car is ridiculous. They have only been getting that kind of range for the last 10 years or so, and then only with a hybrid. My gas car does 200 miles of range, and that is an average car. If that was adequate then, why is it suddenly unacceptable now, when people were fine with that kind of range before?
The nonsense about "you have to have 500 miles of range" before EVs are practical is just that. Nonsense. Giving EVs that kind of range would require carrying a very heavy battery around, adding to the cost and reducing the net efficiency of the car due to dragging all that weight. 100 miles of range would satisfy most people's use cases, and 200 miles is certainly sufficient, and that is the range of most new EVs.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
To me the prime example is a compiler, one of which I happen to be working on at the moment. If your compiler does not work, you are seriously scr*wed. Everyone knows this. Thus compilers simply work, and are reliable most of the time. If they are not, they die quickly, since nobody uses them.
Thus the question becomes: compilers are a large and complex codebase, so if we can get those right, why can't other programs be proven correct as well? The answer is that compiler developers simply take it as a given that the test code for the compiler will be 50% of the total work, as much effort as the other half, developing the main compiler code.
So this means that for most programs, it's not worth it to spend that kind of effort to prove the program is correct, no? Therein lies the paradox. A typical program takes 50% of the total development time or more in debugging. Even very optimistic programmers will admit to that. By that same logic, saying you want to write the program, then do the work to debug it into shape, means you prefer to fix the program AFTER the fact rather than BEFORE the fact, which is the net argument against TDD.
In a word, you can pay now, or pay later.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@vk3fbab I was never for user space drivers, but I am not for kernel space drivers either. The x86 processors were designed with (IIRC) 4 privilege levels, but of course Intel famously f**ked it up, so more commonly it's a binary kernel/user space. In any case, the permission model is dated. The modern model is to wrap each driver in hardware access defined by page protection. This is conflated with "user space drivers" but it need not be. All of that is driven by a design error in interrupt handling by Intel, but of course now we are off topic[1].
Besides the rolling of all drivers into kernel space, a sickness that Windows shares, there is the fact that Linux depends on named linkages, essentially making driver installation driven by a run time linker. This in turn is driven by Linux/Linus's preference for keeping driver models fluid. Why establish a vector model if we aren't going to also keep the driver model stable?
In any case, I would not preach for the so called "user space driver" model. I have seen that implemented by so many corporations in a messed up manner that I am not talking about it anymore (polling instead of interrupts, to mention the biggest issue).
[1] could have an endless thread on that, but note that changing the paging tree root register has nothing to do with the permission level.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Actually, a serious proposal. Orbit a sheet. Like a thin mylar sheet, perhaps a mile square/round, 90 degrees to its orbit. Calculate the orbit so that it does not cross the path of any active satellites during its lifetime. It would get impacted by debris, large and small, that presumably would puncture it easily. However, even though the mylar is thin, these hits would dissipate a lot of energy, which means the orbital path of the debris would be altered, presumably slowed, and thus the debris would deorbit much faster than if left alone. The sheet could be deployed by one of those pizza box size satellites; then after a year or so, the same satellite, attached to the sheet say by a corner, fires and deorbits both itself and the sheet. The satellite does not have to do anything but drag itself back into the atmosphere, where it burns up (just like the debris it targets).
If you make the sheet circular, the satellite can be attached in the middle and deploy the sheet just by spinning and then release the sheet, which deploys from centrifugal force. Then the small satellite stays attached at the center until the time comes to deorbit the sheet.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
The issue I have with IDEs is that they are made too difficult to customize. I don't like templates or automatic formatting; it feels too much like I am fighting with another person for control of the editor. Yet in Eclipse, for example, it's a huge job to turn these features OFF, and there are some aspects of autoformatting that simply cannot be turned off at all (the subject of many a stackoverflow post). The other issue is that IDEs don't understand you may be looking for an IDE as an alternative to an editor, since it is difficult to, well, just edit a file. Often the IDE requires that you register files in a project. Give me an IDE that I can edit a system file with (for example) and that is a generally useful tool. What's wrong with "ide file"? Programs that work in familiar ways as a base encourage use. Otherwise it is like "our IDE is so great, you have to take a course to use it". Finally, the major advantage of vi/vim is that you can use it ANYWHERE, including over an ssh connection without the hassle of an X Windows connection. What's wrong with giving a text only option for an IDE that can be used the same way?
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
So in response to "Tesla should join the existing standards", i.e., J1772: no, sorry, there is NOT in fact one standard outside of Tesla. For DC high power charging there are two in common use, CCS and CHAdeMO, which with Tesla makes THREE different DC fast charge standards. And Tesla is the ONLY standard that integrates all forms of charging, slow, medium and fast, on a single connector.
Thus, the better standard does not have to adapt to the crappy standard (Betamax/VHS is another story). And Tesla fast charging (at 100kW and better) is already at a higher level than the vast majority of DC fast chargers at 50kW.
Should there be a single standard? Of course. But there is plenty of precedent that introducing standards on early, fast moving technology does not end well. Look at all of the iterations standard wall plugs went through, from light socket adapters, to blade plugs, to the modern three-prong grounded plug (and in the USA it is still quite unsafe!).
And keep in mind that I make my living off J1772 chargers and I am saying this!
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
All of these analyses ignore one basic fact: Tesla is just a better overall car. It's about 5 to 10 years ahead of other EVs in terms of cockpit automation, performance, charging rate and charging infrastructure, and downloadable updates. What are the USA car ads full of right now? Auto park and auto lane keeping, features Tesla has had for years. In fact it seems like other car makers don't even GET Tesla. I have a Bolt and a Tesla, and the Bolt, a very advanced car by USA standards, does not touch the Tesla for automation. The display is clunky and can't decide whether screen touches or buttons and knobs are the interface. The display CRASHES, going to software quality issues. The range is just ok, and gets ruined by the fact that charging stations are at 1/5 the speed of Tesla stations (50kW vs 250kW). Plus the charging network consists of having a stack of different charging network cards, and putting up with broken charging stations.
1
-
Add:
1. Reverse mortgages.
2. Medicare advantage plans.
3. Home title protection.
4. Ink jet printers (most of the cost is cartridges, and they self-destruct if you don't use them regularly).
I got a lot out of high school, but it was because I took three hours of vocational classes daily: electronics, automotive, metal shop and print shop. I did that because I knew well that I was never going to be able to swing college and because the other classes like math and English bored me. I was warned several times that I would not graduate, and indeed I didn't, but got a GED later after working at an electronics job. The math requirement was covered by a test -- turns out I didn't need a class to be good at it, and the English class I took at night school, mostly because I ditched English class since it put me to sleep. True story: I kinda liked math but kept flunking it, so I would take it over again. One day I decided that I wasn't really bad at math and could pass the test if I actually studied for a change. I got an A+ on that test and was kicked out of the class for cheating :-).
I don't miss high school.
Ok, final joke. I married an English teacher. Ya, funny I know. Thus I almost spent more time at high school helping her with her teaching than I spent actually GOING to high school, which I ditched a lot. This was during the 2000s. I noticed that the high schools were selling off shop equipment and scaling back their vocational schools. Metal shop and auto shop are dirty jobs don'tcha know. Is it any wonder the guy working on your car is from Vietnam?
1
-
1
-
1
-
1
-
1
-
1
-
A bit overly simplistic. CP/M had serious technical shortcomings at the time of the introduction of the IBM-PC. CP/M had been around for several years by then on the 8080, then Z80 CPU (time did not begin with the IBM-PC). Much of this was due to the fact that CP/M, and the applications it ran, had to be stuffed into very little memory, 64kb (a tiny fraction of today's computers). CP/M did get translated to the 8086/88, but was still seriously behind technically. It required each application to keep its own file information, had no concept of where the end of a file was, and had limited support for applications greater than 64kb even on the 640kb IBM-PC. QDOS was a clean-sheet compatible reimplementation of CP/M, but didn't rip off Gary's code. Regardless, Microsoft rapidly improved the system far beyond its original implementation, with all of the limitations of CP/M addressed.
Digital research actually did come back with GEM, a graphical environment that ran lighter and faster than Windows, and that is another story.
The marketplace beat Gary. Nothing more.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
HDDs are an inferior form of storage compared to SSDs (how nice that the world can be described with TLAs). Thus HDDs are going to live or die based on being a form of backup. We saw this same dynamic happen before. Tape drives and optical drives, as backup media, died out because arrays of HDDs were cheaper. HDDs certainly have a price advantage over SSDs, but that price advantage is eroding as SSDs become cheaper in relative terms. A quick dive into Amazon shows the price advantage at about 5 to 1, HDDs over SSDs, in the same format (SATA). M.2, the rising de facto standard for SSDs (M.2 modules have a significant speed advantage over SATA, which never accounted for the great difference in speed between HDDs and SSDs), has a price premium, but that is eroding rapidly, for the simple reason that there is no fundamental reason for such an advantage of M.2 over SATA. On the contrary, M.2 has less material than SATA and so holds the long term price advantage. SATA drives need a metal case.
The upshot is that HDDs hold a clear advantage, and reason for existence, at the 10 to 1 price level. At 5 to 1 we see that the sales curve for HDDs is trending down. When the price advantage falls below 2 to 1, it's time to get out of the pool. The HDD industry will die.
1
-
1
-
1
-
1
-
1
-
Companies that have the practices you think need to change to solve programmer shortages, like hiring checklist programmers, being unwilling to train, having age limits, etc., aren't stupid. Thus I think it is reasonable to assume the "programmer shortage" is overblown. The Wall St. Journal has run some good articles on why the "open programmer positions" number is a mostly fictional statistic: positions advertised with no intention to fill, positions that were already assigned to a green card worker but required to be advertised by the conditions of H-1B visas, and on and on. Any shortage should produce higher wages, and indeed, programmers are generally paid well. However, the feeling with employers is that most software jobs can be divided up and given to new hires who are cheaper than one or two highly experienced programmers.
I have been in the industry for 40 years, and have lived through several "programmer shortage" waves. The biggest ones, like in the early 1990s, caused a wave of programmer graduates who, in my own personal experience, mostly ended up taking jobs outside of the industry. I don't think this has really changed. Here in Silicon Valley I take Ubers and have many times found out that the driver was a programmer who could not find a job.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Modern electronics makes things simpler and easier. You can be a luddite, or you can embrace it. I took 4 shop classes per day in high school, so much so they told me I wouldn't graduate. I didn't care because I knew a poor kid couldn't swing college, so my future was blue collar, and indeed, I didn't graduate. Fast forward to today. They started ripping "dirty" shop classes out of high schools, and doling out government money "so everyone would go to college". That's why some guy from Vietnam repairs your car. Now people write bitchy videos complaining about how cars are too advanced. They are easier to repair than ever. You just have to know how, and not be stuck in the last century. There is a big push to reduce the amount of wiring in a car. It makes sense. You only really need to route the CAN bus or whatever bus around the car, not individual wires to every light and sensor.
Nobody can repair the car? Bullcookies. CAN bus is easy to hack. Tesla keeps their car wiring a secret, but even they will have to open it up eventually, and in the meantime there are lots of hackers taking the car apart. Electrics are far simpler and don't break down as easily.
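For the curious, here is a minimal Linux SocketCAN sketch that just reads raw frames off a CAN interface. The interface name "can0" is an assumption, and what the frames mean is entirely up to the particular car.

/* Minimal Linux SocketCAN sketch: dump raw frames from a CAN interface.
   "can0" is an assumed interface name; frame meanings depend on the vehicle. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) { perror("socket"); return 1; }

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");                         /* assumed interface */
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

    struct sockaddr_can addr = {0};
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) { perror("bind"); return 1; }

    /* Print each frame's ID, length, and data bytes. */
    struct can_frame frame;
    while (read(s, &frame, sizeof frame) == sizeof frame) {
        printf("id=0x%03X len=%d data=", frame.can_id, frame.can_dlc);
        for (int i = 0; i < frame.can_dlc; i++)
            printf("%02X ", frame.data[i]);
        printf("\n");
    }
    close(s);
    return 0;
}

That is the whole barrier to entry on a standard CAN bus; the hard part is figuring out which IDs mean what on a given car.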
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1. Reverse the rule that senators are popularly elected. They were representatives of the states in the original constitution. The states should have a right to representation at the federal level. The senators are accountable to the people indirectly through the state governments.
2. The parliament model is superior to the presidential model. The presidency has become a "king of the hill" contest, and each of our binary parties considers itself to have "won" if it gets the presidency. The prime minister model was not designed, it evolved, and the prime minister is both more accountable to the people and to the house. He/she can be turned out on a dime, and that's how it should be. Parliaments generate fights, coalitions, compromises, etc. That's how government should work, not having one person step in and resolve it all. If we had a parliament, we would have more than 2 parties, as other countries do. The current structure of government encourages the model of two parties having all the power.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Not to excuse the silicon manufacturers, but Silicon Valley, AKA San Jose, has had a number of contaminations over time. The first was mercury, which they used to mine in the hills above San Jose. It was used during the gold rush to process gold bearing ore. The hills are still quite contaminated with the stuff, and fishing is prohibited, leading to an odd abundance of fish there. Nothing saves your life like being a toxic fish.
More recently, it was discovered that the company making rockets, UTC, in the hills, had left large waste ponds of the makings of rocket fuel, and those waste ponds were leaking into the ground water for many years. This leads to thyroid disease.
I can't really complain. Starting in electronics at the tender age of 16, I have inhaled tons of trichlor and solder fumes and rubbed up against a lot of lead. We used to carry it around in bars, and melt it in pots and soldering machines. But as you can see, it has had no effect on me. But then, as you can see, it has had no effect on me...
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@jonassattler4489 Yea, it's an unfortunate fact that I am aware of. I am a pilot as well as a programmer. The use of C is a shockingly bad choice for life critical applications such as avionics. I disagree there are no alternatives. Java, which has been around for decades, is a fully protected language, and Pascal has been around for 50 years now. Fully protected.
Allocators are well debugged. The bugs that occur happen not because of the allocator (which is usually only a page or so of code) but because of incorrect use. Regardless, again, there are languages besides C, that running segment violation of a language, that properly check allocations.
I use C in most of my work, have to make a living. You code in what your employer uses. But again, C is a terrible choice, and yes, there are alternatives.
I have lots of issues with NASA's use of COTS (commercial off-the-shelf). When the Mars probe locked up because of a priority inversion, my first reaction was "they use a priority based RTOS???". Priority based OSes have known issues that (to me) stem from use of an oversimplified model of tasking. Demand based systems (well covered in the literature) are better and actually properly model the way tasking works.
I'll put it succinctly: NASA chose their languages by popularity, not by fitness for purpose. The military went through the same thing, and they chose Ada because it had protection (long before Java and the protected language fad). It's simple. The military didn't want to run nuclear missiles on C code. They kinda got out there on their own limb with Ada, but they made it work. Ada is still in use.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
The war over color TV standards got repeated with the advent of HDTV. The FCC had already signed off on an analog, backwards compatible system when a small silicon company called General Instrument showed that by using a digital carrier based system with MPEG, the amount of bandwidth needed by the (very wasteful) analog TV system could be reduced significantly, while at the same time dramatically increasing the reception reliability. GI had already done this for digital cable systems, so over the air systems had fallen behind. The FCC did another about face, and the broadcasters suddenly did as well. Cynics said that the true underlying cause was the broadcasters' realization that the very same digital technology that could give an HDTV signal in the same 6MHz channel as analog TV could very well be used to compress existing TV into 1MHz or less, and result in broadcasters losing up to 5/6ths of their very valuable spectrum real estate if the FCC (and the public) woke up to this fact. Thus HDTV was born, and the broadcasters used the technology to split up into multiple channels anyway... but under their control.
The true result of all of the nonsense is that mpeg-2, and later mpeg-4, took over TV broadcasting by storm, rendering the actual method used to broadcast TV increasingly irrelevant. The broadcasters kept their spectrum allocations, but the number of over the air users decreases daily. And the FCC increasingly puts pressure on broadcasters to give up that real estate to other uses.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1