Comments by "Scott Franco" (@scottfranco1962) on "Asianometry"
channel.
-
496
-
My favorite application for MEMS is in aviation, since I am a recreational pilot. One of the first and most successful applications of MEMS was accelerometers, which don't need openings in the package to work. MEMS inertial sensors can replace gyroscopes as well as enable inertial navigation, since they can be made to sense rotation (with vibrating-structure MEMS gyros) as well as linear movement. With the advent of MEMS, avionics makers looked forward to replacing expensive and maintenance-intensive mechanical gyroscopes with MEMS. A huge incentive was reliability: a gyroscope that fails can bring down an aircraft. The problem was accuracy. MEMS sensors displayed drift that was worse than the best mechanical gyros. Previous inertial navigation systems used expensive optical gyros, ring-laser and fiber-optic types, that worked by sending light around a loop (such as a spool of fiber-optic line) and measuring the tiny phase shift due to rotation.
MEMS accelerometers didn't get much better, but they are sweeping all of the old mechanical systems into the trash can. So how did this problem get solved? Well, the original technology for GPS satellite location was rather slow, taking up to a minute to form a "fix". With more powerful CPUs it got much faster. GPS cannot replace gyros, no matter how fast it can calculate, but the faster calculation enabled something incredible: the GPS solution could be used to continuously calibrate the MEMS sensors. With careful math, a combined GPS/multi-axis accelerometer package can accurately and reliably find a real-time position and orientation in space. You can think of it this way: GPS provides position over long periods of time, but very accurately, and MEMS accelerometers provide position and orientation over short periods of time, but not so accurately. Together they achieve what neither technology can do on its own.
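(An aside for the technically curious: that "accurate long-term vs. accurate short-term" blending is the idea behind a complementary or Kalman filter. Below is a minimal one-dimensional sketch of the concept; the function name, the 0.98 blend factor, and the sample data are all made up for illustration, and real avionics use full Kalman filters over position, velocity, and attitude.)

```python
# Toy 1-D complementary filter: blend a MEMS accelerometer (fast but drifty)
# with GPS fixes (slow but absolute). Illustration only, not avionics code.

def fuse(gps_positions, accelerations, dt, alpha=0.98):
    """Same-length position/acceleration samples taken every dt seconds;
    alpha sets how much the short-term inertial estimate is trusted."""
    position = gps_positions[0]      # start from the first GPS fix
    velocity = 0.0
    fused = []
    for gps_pos, accel in zip(gps_positions, accelerations):
        # Dead-reckon with the accelerometer (good short-term, drifts long-term).
        velocity += accel * dt
        inertial_pos = position + velocity * dt
        # Pull the estimate back toward GPS (good long-term, noisy short-term).
        position = alpha * inertial_pos + (1.0 - alpha) * gps_pos
        fused.append(position)
    return fused

# Example: 10 seconds of constant 1 m/s^2 acceleration, with a 5% biased sensor.
gps = [0.5 * 1.0 * (i * 0.1) ** 2 for i in range(100)]   # ideal GPS track (m)
acc = [1.05] * 100                                       # biased accelerometer
print(fuse(gps, acc, dt=0.1)[-1])   # stays near the true ~49 m; GPS absorbs the bias
```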
The result has been a revolution in avionics. Now even small aircraft can have highly advanced "glass" panels that give moving maps, a depiction of the aircraft attitude, and even a synthetic view of the world outside the aircraft in conjunction with terrain data. It can even tell exactly which way the wind is blowing on the aircraft, because this information falls out of the GPS/accelerometer calculation.
142
-
I think I told this story here before, but it bears repeating. In the early 80's, windowed UV-erasable PROMs were a thing. It was the time of Japan bashing, and accusations of their "dumping" on the market. We used a lot of EPROMs from different sources, and Toshiba was up and coming. We used mainly Intel EPROMs at the time. The state of the art back then was 4kb moving to 8kb (I know, quaint). Because of the window in the top of the EPROM you could see the chip. Most of us used this feature, when we had a bad EPROM, to get a little light show by plugging in the EPROM upside down and sealing the fate of the chip.
Anyways, Intel and Toshiba were in a price war, so the chips from each vendor were about equivalent in price. But side by side in the UV eraser tray, what you saw was shocking. The Toshiba chips were about 1/4 the size of the Intel chips. Yes, those "inferior" Japanese were kicking our a**es. Intel struggled along for a while, then exited the EPROM market. The "anti-dumping" thing had exactly one result: we could go to Japan, to the Akihabara market (street vendors!), get chips with twice or four times the capacity of USA chips for cheap, and bring them back in our luggage.
24
-
Great video on one of my favorite subjects. I'd like to add a couple things. First of all (as the poster below said), this history skips a very important branch of IC history: the gate array, from which FPGAs take their name (Field Programmable Gate Array). Basically, gate arrays were ICs that consisted of a matrix of transistors (often termed gates) without the interconnect layers. Since transistors then, and largely even today, are patterned into the silicon wafer itself, this divided wafer processing into two separate steps: the wafer patterning, and the deposition of aluminum (interconnect). In short, a customer could save quite a bit of money by paying only for the extra masks needed to deposit interconnect, and take stock wafers to make an intermediate type of chip between full custom and discrete electronics. It was far less expensive than full custom, but of course that was like saying that Kathmandu is not as high as Everest. Xilinx used to have ads showing a huge bundle of bills with the caption "does this remind you of gate array design? Perhaps if the bills were on fire".
Altera came along and disrupted the PLA/PAL market and knocked over the king o' them all, the 22V10, which could be said to be the 7400 of the PAL market. They owned the medium-scale programmable market for a few years until Xilinx came along. Eventually Altera fought back, but by then it was too late. However, Altera got the last word. The EDA software for both Xilinx and Altera began to resemble those "bills o' fire" from the original Xilinx ads, and Altera completely reversed its previous stance toward small developers (which could be described as "if you ain't big, go hump a pig") and started giving away its EDA software. Xilinx had no choice but to follow suit, and the market opened up with a bang.
There have been many alternate technologies to the RAM cell tech used by Xilinx, each with an idea toward permanently or semi-permanently programming the CLB cells so that an external loading PROM was not required. Some are still around, but what all that work and new tech was replacing was a serial EEPROM of about 8 pins that cost approximately as much as ant spit, so they never really knocked Xilinx off its tuffet. My favorite story about that was one maker here in the valley who was pushing "laser reprogrammability", where openings in the passivation of a sea-of-gates chip allowed a laser to burn interlinks and thus program the chip. It was literally PGA, dropping the F for field. It came with lots of fanfare, and left with virtual silence. I later met a guy who worked there and asked him "what happened to the laser programmable IC tech?". He answered in one word: contamination. Vaporizing aluminum and throwing the result outwards is not healthy for a chip.
After the first couple of revs of FPGA technology, the things started to get big enough that you could "float" (my term) major cells onto them, culminating with an actual (gasp) CPU. This changed everything. Now you could put most or all of the required circuitry on a single FPGA, and the CPU to run the thing as well. This meant that software hackers (like myself) could get into the FPGA game. The only difference now is that even a fairly large-scale 32-bit processor can be tucked into the corner of one.
In the olden days, when you wanted to simulate hardware for an upcoming ASIC, you employed a server farm running hardware simulations 24/7, or even a special hardware simulation accelerator. Then somebody figured out that you could lash a "sea of FPGAs" together, load a big ol' giant netlist into it, and get the equivalent of a hardware simulation, but at near the final speed of the ASIC. DINI and friends were born: large FPGA array boards that cost a couple of automobiles to buy. At this point Xilinx got wise to the game, I am sure. They were selling HUGE $1000-per-chip FPGAs that could not have had a real end-consumer use.
13
-
@bunyu6237 I think others would supply better info than I on this subject, since I haven't been in the IC industry for decades; I did this work back in the late 1980s. At that time, the reverse engineering industry was moving from hand reversing to fully automated reversing. However, if you don't mind speculation, I would say there is no concrete reason why the reversing industry would not have kept up with newer geometries. The only real change would be that it's basically not possible to manually reverse these chips anymore. I personally worked on reversing a chip about 4 generations beyond the Z80, which was not that much. At that time, blowing up a chip to the size of a ping-pong table was enough to allow you to see and reverse engineer individual transistors and connections.
Having said that, I have very mixed feelings about the entire process. I don't feel it is right to go about copying others' designs. I was told at the time that the purpose was to ensure compatibility, but the company later changed its story.
On the plus side, it was an amazing way for me to get on board in the IC industry. There is nothing like reverse engineering a chip to give you a deep understanding of it.
However, I think I would refuse to do it today, or at least try to steer toward another job.
For anyone who cares why I have any connection to this: I used to try to stay with equal parts software and hardware. This was always a difficult proposition, and it became easier and more rewarding financially to stay on the software side only, which is what I do today. However, my brush with the IC industry made a huge impression on me, and still shapes a lot of what I do. For example, a lot of my work deals with SoCs, and I am part of the subset of software developers who understand SoC software design.
9
-
Just one small addition: when Intel pushed onboard graphics, where the graphics memory was part of the main memory of the CPU, it was thought that this video solution would actually be faster, since the CPU would have direct access to the frame buffer, as well as having all of the resources there to access it (cache, DMA, memory management, etc). The reason they lost that advantage in the long run was twofold: VRAM, or dual-ported video RAM, which could be read and written by the CPU at the same time it was being serially read out to the raster scan device; and the rise of the GPU, which meant that most of the low-level video memory access was handled by a GPU on the video card that did the grunt work of drawing bits into video RAM. Intel instead ran down the onboard video rabbit hole. Not only did they fail to win the speed race with external video cards, but people began to notice that the onboard video solutions were sucking considerable CPU resources away from compute tasks. Thus the writing was on the wall. Later, gamers knew onboard video only as that thing they had to flip a motherboard switch to disable when putting a graphics card in, and nowadays not even that. It's automatic.
6
-
What people miss about the RISC revolution is that in the 1980s, with Intel's 8086 and similar products, the increasingly complex CPUs of the day were using a technique called "microcoding": a lower-level instruction set inside the CPU that ran instruction decoding and the like. It was assumed that the technique, inherited from mini and mainframe computers, would be the standard going forward, since companies like Intel were increasing the number of instructions at a good clip. RISC introduced the idea that if the instruction set were simplified, CPU designers could return to pure hardware designs, with no microcode, and use that to retire most or all instructions in a single clock cycle. In short, what happened is the Titanic turned on a dime: Intel dropped microcode like a hot rock and created largely hardwired CPUs, showing that any problem can be solved by throwing enough engineers at it. They did it by translating the CISC x86 instructions to an internal RISC form and deeply parallelizing the instruction flow, the so-called "superscalar" revolution. In so doing they gave x86 new life for decades.
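(To make the "translate CISC to internal RISC" idea concrete, here is a toy sketch. It is not Intel's decoder; the instruction spelling and micro-op format are invented purely for illustration.)

```python
# Toy illustration of cracking a complex memory-operand instruction into
# simple micro-ops that a RISC-like core can schedule independently.
# Instruction and micro-op formats are invented for this example.

def crack(instruction):
    op, dst, src = instruction
    if op == "ADD" and src.startswith("["):     # CISC form: ADD reg, [mem]
        addr = src.strip("[]")
        return [
            ("LOAD", "tmp0", addr),             # micro-op 1: read memory
            ("ADD",  dst, "tmp0"),              # micro-op 2: plain register add
        ]
    return [instruction]                        # simple ops pass through as-is

print(crack(("ADD", "eax", "[rbx+8]")))
# -> [('LOAD', 'tmp0', 'rbx+8'), ('ADD', 'eax', 'tmp0')]
```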
I worked for SUN for a short time in the CPU division when they were going all in on multicore. The company was already almost in freefall. The SPARC design was flawed and the designers knew it. CEO Jonathan Schwartz faced questioning at company meetings when he showed charts with Java "sales" presented as if it were a profit center (instead of something given away for free). I came back to SUN again on contract after the Oracle merger. They had the same offices and the little Java mascots on their desks. It was probably telling that after my manager invited me to apply for a permanent position, I couldn't get it through their online hiring system, which was incredibly buggy, and then they went into a hiring freeze, so it was irrelevant.
I should also mention that not all companies did chip design in that era with SUN workstations. At Zilog we used racks full of MicroVAXes and X Window graphics terminals. I still have fond memories of laying out CPUs and chain-smoking until midnight in the late 1980s.
5
-
@allentchang Hummm.... back in 1987 it was LSI workstations, if I recall. I don't know the operating system, but I believe they were Tektronix graphics terminals (Zilog). They were not fast, but very high resolution for the day. In 1993 it was Apollo workstations (Seagate), which were running Mentor. They certainly ran a Unix variant, but it was an unusual one. Into the new century, it's all been Verilog using Xilinx software (various startups), running on Windows (does Xilinx even run on Linux/Unix?). Our fabs also tell the story: last century it was a custom fab (Zilog), then AT&T's fab (Seagate), then after that probably TSMC, I don't recall.
Afternote: actually, I do recall. At Zilog the Tek terminals were driven by racks and racks of LSI-11s, a PDP-11 variant that fit in a single or double rack unit. I remember because we had a big serial port mux that would let you get a connection to any of the machines. I used to write scripts that would start jobs on multiple machines overnight, which was the only way to get reasonable simulations of chips. I believe they were running Unix. Our chip simulations were done on a custom gate-level simulator that I learned a lot from, since it would simulate things like domino logic.
And yes, I am old.
5
-
@D R That's a great question. In fact, we could just use the altitude from the GPS system. Right now you dial in the barometric "base pressure", or pressure at sea level. This is used to calibrate the altimeter so that it delivers accurate results; I believe it must be within 100 feet of accuracy (other sources say the FAA allows 75 feet). It's a big, big deal. A few feet could mean the difference between hitting a building and passing over it. Thus when you fly, you are always getting pressure updates from the controller, because you need a setting that is as close as possible to the pressure in your area.
So why not use the GPS altitude, which is more accurate?
1. Not everyone has a GPS.
2. Even fewer have built in GPS (in the panel of the aircraft).
3. A large number of aircraft don't recalibrate their altimeters at all.
4. New vs. old. Aircraft have been around for a long time. GPS not so much.
If you know a bit about aircraft, you also know that number 3 there lobbed a nuclear bomb into this conversation. Don't worry, we will get there. First, there is GPS and there is GPS, as implied by 1 and 2. Most GPS units in use (in light aircraft) are portable. Long ago the FAA mandated a transponder-based system called "Mode C" that couples a barometric altimeter into the transponder. OK, now we are going into the twisty road bits. That altimeter is NOT COMPENSATED FOR BASE PRESSURE. In general the pilot does not read it, the controller does (OK, most modern transponders do read it out, mine does, but an uncompensated altitude is basically useless to the pilot). The controller (generally) knows where you are, and thus knows what the compensating pressure is (no, he/she does not do the math, the system does it for them).
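(For the curious, the compensation the system applies for the controller is simple enough to sketch. The rule of thumb is roughly 1,000 feet per inch of mercury near sea level; real systems use standard-atmosphere tables, and the numbers below are made-up examples.)

```python
# Rough sketch: correcting a Mode C report (always pressure altitude,
# referenced to 29.92 inHg) to indicated altitude using the local setting.
# Uses the ~1,000 ft per inHg rule of thumb; not a real avionics algorithm.

STANDARD_INHG = 29.92
FEET_PER_INHG = 1000.0   # rule-of-thumb value near sea level

def indicated_altitude(pressure_altitude_ft, altimeter_setting_inhg):
    correction = (altimeter_setting_inhg - STANDARD_INHG) * FEET_PER_INHG
    return pressure_altitude_ft + correction

# A Mode C report of 4,500 ft on a low-pressure day (setting 29.52 inHg)
# corresponds to roughly 4,100 ft indicated -- a 400 ft difference, which is
# exactly why those base pressure updates from the controller matter.
print(indicated_altitude(4500, 29.52))   # ~4100.0
```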
Note that GPS had nothing to do with that Mode C discussion. So, for the first part of this, for GPS to be used for altitude, the pilot would have to go back to constantly reporting his/her altitude to the controller. UNLESS!
You could have a Mode S transponder, or a more modern UAT transceiver. Then your onboard GPS automatically transmits the altitude, the position, and the speed and direction of the aircraft.
Now we are into equipage. Note "onboard GPS". That means built into the aircraft. Most GPS units on light aircraft are handheld, which cost a fraction of built-in avionics. Please let's not get into why that is; it's about approved combinations of equipment in aircraft, calibration, and other issues. The mere mention of it can cause fistfights in certain circles.
OK, now let's get into number 3. If you are flying over, say, 14,000 feet, it's safe to say you are not in danger of hitting any mountains, or buildings, or towers. Just other aircraft. So you don't care about pressure compensation. So the rules provide that once you are above 18,000 feet, you reach down and dial in 29.92 inches of mercury, which the FAA has decreed is "standard pressure" (the FAA also has things like standard temperature, standard tree sizes, etc. Fun outfit). So what does that mean? Say you are TWA flight 1, and TWA flight 2 is headed the opposite direction at the same altitude. Both read 18,000 feet. Are they really at 18,000 feet? No, but it doesn't matter. If they are going to collide, they are in the same area, and thus the same pressure, meaning that their errors cancel. It doesn't matter if they are really at 19,123 feet; they both read the same. Thus climbing to 19,000 (by the altimeter) means they will be separated by 1,000 feet.
So the short answer is the final one: the barometric system is pretty much woven into the present way aircraft work. It may change, but it is going to take a long time. Like not in my lifetime.
5
-
@DumbledoreMcCracken I'd love to make videos, but I have so little free time. Allen was a true character. The company (Seagate) was quite big when I joined, but Allen still found the time to meet with small groups of us. There were a lot of stories circulating... that Allen had a meeting and fired anyone who showed up late because he was tired of nobody taking the meeting times seriously, stuff like that. He is famous for (true story) telling a reporter who asked him "how do you deal with difficult engineers?"... his answer: "I fire them!". My best story about him was our sailing club. I was invited to join the Seagate sailing club. They had something like a 35-foot Catalina sailboat for company use, totally free. We ended up sailing it every Wednesday in the regular race at Santa Cruz Harbor. It was owned by Allen. On one of those trips, after much beer, the story of the Seagate sailboat came out.
Allen didn't sail or even like sailboats. He was a power boater and had a large yacht out of Monterey harbor. He had rented a slip in Santa Cruz literally on the day the harbor opened, and had rented there ever since. The harbor was divided in two by a large automobile bridge that was low and didn't open. The clearance was such that only power boats could get through, not sailboats (unless they had special gear to lower the mast). That divided the harbor into the front harbor and back harbor.
As more and more boats wanted space in the harbor, and the waiting list grew to decades, the harbor office came up with a plan to manage the space, which was "all power boats to the back, sailboats to the front", of course with an exception for working (fishing) boats. They called Allen and told him to move. I can well imagine that his answer was unprintable.
Time went on, and their attempts to move Allen ended up in court. Allen felt his position as a first renter exempted him. The harbor actually got a law passed in the city to require power boats to move to the back, which (of course) Allen termed the "Allen Shugart rule".
Sooooo.... comes the day the law goes into effect. The harbormaster calls Allen: "Will you move your boat?" Allen replies: "Look outside." Sure enough, Allen had moved his yacht to Monterey and bought a used sailboat, which was now in the slip. Since he had no use for it, the "Seagate sailing club" was born. That was not the end of it. The harbor passed a rule that the owners of boats had to show they were using their boats at least once a month. Since Allen could not sail, he got one of us to take him out on the boat, then he would parade past the harbormaster's office and honk a horn and wave.
Of course Allen also did fun stuff like run his dog for president. In those days you either loved Allen or hated him; there was no in-between. I was in the former group, in case you could not tell.
I was actually one of the lucky ones. I saw the writing on the wall, that Seagate would move most of engineering out of the USA, and I went into networking at Cisco at a time when they were still throwing money at engineers. It was a good move. I ran into many an old buddy from Seagate escaping the sinking ship later. Living in the valley is always entertaining.
5
-
It's a good question (further research into hard drives). They are still doing some amazing things: advanced magnetic materials, layered recording, etc. However, the basis of the industry is electromechanical, which means it is inherently slower and more complex than SSDs. You can only move a mass (the head arm) so fast.
The recent research in disk drives has gone mainly into increasing their density, and therefore reducing cost. Because this does nothing to help the speed disadvantage of HDDs, this trend will actually accelerate the demise of the HDD industry, because it accelerates the trend of HDDs toward being a backup medium only.
HDDs cannot get any simpler. They have two moving parts, the head arm and the disks, and both probably ride on air bearings now (certainly true of the heads, not sure about the spindles). Because HDDs are more complex and take more manufacturing effort than SSDs, the cost advantage of HDDs is an illusion. The fall is near.
5
-
IC masking and screen printing: well, I think it is more accurate to say that these techniques were well known from the manufacture of printed circuit boards, which were in full swing at the time of the first ICs, and from there you get back to printing, both screen printing and lithography. Also, resists were in use before ICs to perform etching on metal, rock and other surfaces, which is very much a thing today. In fact, etching glass with acid, still done today, is almost a direct line to ICs, since silicon dioxide is basically glass.
4
-
@bunyu6237 A couple of reasons (why software rather than hardware). First of all, there is a larger group of people working on software than hardware, so the jobs are more plentiful and the demand greater. Second, hardware/software crossover people are considered odd birds, and when I used to do both I had people literally telling me to "pick a side", go one way or the other. I find it easier to get and do software projects, and the pay is better. I dabbled in Verilog long after I stopped being paid for hardware design, and I realized it would take a lot of work to get a foothold in good Verilog design, with virtually no corresponding increase in salary, and more likely a decrease for a while as I gained credibility as a Verilog designer. The last time I was paid to design hardware it was still schematic entry (and yes, in case you haven't figured it out, I am indeed that old).
Of course, a lot of this is my personal situation. I am not sure any of the above would serve as career advice. I definitely consider my hardware background to be a career asset, since I specialize in low-level software design (drivers, embedded, etc). Having said that, I keep up with hardware advances and have often dreamed of uniting my Verilog experience with my software experience. That dream is unrealized.
4
-
Good job. There was a bit of conflation there with microcode (is it firmware?). It would have helped to underline that it is entirely internal to the chip and operates the internals of the CPU. In any case, microcode was discarded with the Pentium series, KINDA. It actually lives on today in so-called "slow path" instructions like block moves in the later CPUs, which use microcode because nobody cares if they run super fast, since they are generally only used for backwards compatibility and were deprecated in 64-bit mode.
I await the second half of this! Things took off again with AMD64 and the "multicore wars". Despite the mess, the entire outcome probably could have been predicted on sheer economic grounds, that is, the market settling into a #1 and #2 player with small also-rans. Today's desktop market, at least, remains in the hands of the x86 makers, except for the odd story of Apple and the M-series chips. Many have pronounced the end of the desktop, but it lives on. Many or even most of my colleagues use Apple Macs as their preferred development machines, but, as I write this, I am looking out at a sea of x86 desktop machines. It's rare to see a Mac desktop, and in any case, at this moment even the ubiquitous MacBook Pro laptops the trendsetters love are still x86 based, although I assume that will change soon.
Me? Sorry, x86 everything, desktop and laptop(s). At last count I have 5 machines running around my house and office, and 4 laptops. I keep buying Mac laptops and desktops, 'cause, you know, gotta keep up with things, but they grow obsolete faster than a warm banana. Yes, I had PowerPC Macs, and yes, they ended up in the trash. And yes, I will probably buy Mac M2s at some point.
3
-
You have tapped into a classic boondoggle here. When I was with Cisco around 2000, the big "new wave" was all-optical switching, which was supposed to be faster than converting optical to electronic and back again, often using DMMs (digital micromirrors). How'd that work out? The startup world was littered with the smoking holes of failed companies.
I think the bottom line is we know a lot about devices that operate on electrical signals, but not so much about devices that work on pure light. As in: everyone knows what an electrical NAND gate is, and optical NAND gates are possible, but good luck getting them to work, be integrated at high densities, and be efficient. Let's start with the basics. You can route signals easily in electronics, and the 10+ layers of interconnect on current ICs attest to this. What's a light conductor on an IC? Well, air, which should be free, but is far from it: you would have to couple in and out of the IC at many points, which is expensive in terms of real estate. You could conduct with glass, and that is a whole 'nother level.
I'm not saying never. I'm just saying that with any breathless new wave of technology you have to look at history and see if that wave has not broken before, like every 5 years or so (cough.... AI).
2
-
@ttb1513 The last company I did silicon design for, Seagate, was emblematic of the basic problems with ASICs. It was 1995 and they were new at in-house ASIC design. And they weren't very good at it. It was random logic design before Verilog. An example of this was our timing verification. We had timing paths that were too long for the cycle time, and so they simply allowed for the fact that the signal would be sampled in the next clock cycle. Now, if you were an ASIC designer back then, what I just said would have made you reach for the Tums, if not a 911 call for cardiac arrest. It's an open invitation to metastability. And indeed, our AT&T fab guys were screaming at us to stop that. I got put in charge of hardware simulation for the design, and I have detailed this fiasco in these threads before, so I won't go over it again.
The bottom line was that ASIC process vendors were losing trust in their customers to perform verification. The answer was that they included test chains in the designs that would automatically verify the chips at the silicon level. It meant that the manufactured silicon would be verified, that is, defects on the chip would be caught regardless of what the design did. My boss, who with the benefit of hindsight I can now certify was an idiot, was ecstatic over this new service. It was a-gonna fixa all o' de problems, don't ya know? I pointed out to him, pointlessly I might add, that our design could be total cow shit and still pass these tests with flying colors. It was like talking to a wall.
In any case, the entire industry went that way. Designs are easier to verify now that the vast majority of them are in Verilog. I moved on to software only, but I can happily say that there are some stunning software verification suites out there, and I am currently working on one, so here we are.
2
-
@AlexanderSylchuk Oh, you are begging for my favorite story. I interviewed with a company that made precision flow valves. These were mechanical nightmares of high precision that accurately measured things like gas flow in chemical processes. This is like half the chemical industry (did you know a lot of chemical processes use natural gas as their feedstock?). Anyways, what has that got to do with this poor programmer? Well, like most industries they were computerizing. They had a new product that used a "bang bang" valve run by a microprocessor. A bang bang valve is a short piston driven by a solenoid: when not energized, the piston is retracted by a spring, which opens an intake port and lets a small amount of gas into a chamber; then the solenoid energizes, pushes the piston up, and pushes the gas out another port. Each time the solenoid activates, a small amount of gas is moved along. Hence the "bang bang" part. If you want to find one in your house, look at your refrigerator. It's how the Freon compressor in it works.
OK, well, that amount of gas is not very accurately metered, no matter how carefully you machine the mechanism. But it turns out to be "self accurate", that is, whatever the amount of gas that gets moved IS, it is always the same. The company, which had gotten quite rich selling its precision valves, figured it could produce a much cheaper unit that used the bang bang valve. So they ginned it up, put a compensation table in it so the microprocessor could convert gas flows to bang bang counts, and voila, the product! It worked. Time to present it to the CEO! The CEO asks the engineers "just how accurate is it?" Engineer says:
"Well... actually it is more accurate than our precision valves. And far cheaper."
The story as told to me didn't include just how many drinks the CEO needed that night.
So the CEO, realizing that he had seen the future, immediately set into motion a plan to obsolete their old, expensive units and make the newer, more accurate and cheaper computerized gas flow valves.
Ha ha, just kidding. He told the engineers to program the damn thing to be less accurate so that it wouldn't touch their existing business.
Now, they didn't hire me. Actually, long story: they gave me a personality test that started with something like "did you love your mother". I told them exactly where, in what direction, and with how much force they could put their test, and walked out.
I didn't follow up on what happened, mainly because I find gas flow mechanics to be slightly less interesting than processing tax returns. But I think if I went back there, I would find a smoking hole where the company used to be.
And that is the (very much overly long) answer to your well-meaning response.
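(Back to the technical bit of that story for a moment: the "compensation table" is just a calibration lookup. You measure the actual delivered flow at a few pulse rates on a bench, then interpolate to turn a requested flow into a pulse rate. A minimal sketch, with entirely invented calibration numbers:)

```python
# Sketch of a bang-bang valve compensation table: bench-measured
# (pulses per second, flow in liters per minute) pairs, linearly interpolated.
# All numbers are invented for illustration.

CALIBRATION = [(0, 0.0), (10, 0.9), (20, 2.1), (40, 4.6), (80, 9.8)]

def pulses_for_flow(target_lpm):
    """Interpolate the calibration table to find the pulse rate for a flow."""
    for (p0, f0), (p1, f1) in zip(CALIBRATION, CALIBRATION[1:]):
        if f0 <= target_lpm <= f1:
            frac = (target_lpm - f0) / (f1 - f0)
            return p0 + frac * (p1 - p0)
    raise ValueError("flow outside calibrated range")

print(pulses_for_flow(3.0))   # ~27.2 pulses/sec for 3 L/min with this table
```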
1
-
"A/C systems are more economical than D/C".... welllll in old tech that was true. Transformers were easy to make, power came out of the generator in A/C, and the most common use of the power in the early 1900s was motors that ran on A/C.
Now fast forward to today. Your power plug shrank. What happened? Well, copper is expensive, so anything that shrinks that is cool. Plus, power is lost in the transformer, which was what was in that heavy brick that used to power your laptop. In addition, generally you use D/C power in most of your house now. All the electronics, those LED lightbulbs, etc. Yep, all D/C.
Electronics came to the rescue. Turns out if you use high power, high (er) frequency (than A/C at 60/50 hz) you can do the same power conversion with way way (way) less copper or even no copper at all. Plus, it is way easier to perform high power conversion now, even at high voltages.
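(A back-of-the-envelope for why higher frequency buys you smaller magnetics, using the standard transformer EMF relation V_rms = 4.44 * f * N * A * B. The component values below are illustrative, not a real power supply design.)

```python
# For a given voltage and peak flux density, the required turns x core-area
# product of a transformer scales as 1/frequency (V_rms = 4.44 * f * N * A * B).
# Values are illustrative only.

def turns_area_product(v_rms, freq_hz, b_peak_tesla):
    return v_rms / (4.44 * freq_hz * b_peak_tesla)   # N * A_core

mains = turns_area_product(120.0, 60.0, 1.2)       # 60 Hz iron-core transformer
smps  = turns_area_product(120.0, 100_000.0, 0.2)  # 100 kHz ferrite transformer

print(f"60 Hz needs ~{mains / smps:.0f}x the turns-area product of 100 kHz")
# -> roughly 280x, which is why the old laptop brick was so heavy
```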
Thus things are changing, rapidly. There is a good chance that many or even most lighting systems will go DC, distributed on DC feeder lines. This is already true in some large offices and industrial concerns. This is because a lot of the power used in LED lighting is lost in the conversion from AC to DC. Want to prove this to yourself? Go find a screw-in LED lamp in your house. Feel the glass where the light comes out. Now feel the base (don't touch the metal). The base is hotter, isn't it? That is where the AC-to-DC converter is.
1
-
HDDs are an inferior form of storage compared to SSDs (how nice that the world can be described with TLAs). Thus HDDs are going to live or die based on being a form of backup. We saw this same dynamic happen before: tape drives and optical drives, as backup media, died out because arrays of HDDs were cheaper. HDDs certainly have a price advantage over SSDs, but that advantage is eroding as SSDs become cheaper in relative terms. A quick dive into Amazon shows the price advantage at about 5 to 1, HDDs over SSDs, in the same format (SATA). M.2, the rising de facto standard for SSDs (M.2 modules have a significant speed advantage over SATA, which never accounted for the great difference in speed between HDDs and SSDs), carries a price premium, but that is eroding rapidly, for the simple reason that there is no fundamental justification for such a premium of M.2 over SATA. On the contrary, M.2 uses less material than SATA and so holds the long-term price advantage: SATA drives need a metal case.
The upshot is that HDDs hold a clear advantage, and reason for existence, at the 10-to-1 price level. At 5 to 1 we see that the sales curve for HDDs is trending down. When the price advantage falls below 2 to 1, it's time to get out of the pool. The HDD industry will die.
1
-
Not to excuse the silicon manufacturers, but Silicon Valley, AKA San Jose, has had a number of contaminations over time. The first was mercury, which they used to mine in the hills above San Jose. It was used during the gold rush to process gold-bearing ore. The hills are still quite contaminated with the stuff, and fishing is prohibited, leading to an odd abundance of fish there. Nothing saves your life like being a toxic fish.
More recently, it was discovered that UTC, the company making rockets in the hills, had left large waste ponds full of the makings of rocket fuel, and those waste ponds had been leaking into the groundwater for many years. That stuff leads to thyroid disease.
I can't really complain. Starting in electronics at the tender age of 16, I have inhaled tons of trichlor and solder fumes and rubbed up against a lot of lead. We used to carry it around in bars, and melt it in pots and soldering machines. But as you can see, it has had no effect on me. But then, as you can see, it has had no effect on me...