Comments by "Scott Franco" (@scottfranco1962) on "Asianometry"
channel.
-
496
-
My favorite application for MEMS is in aviation, since I am a recreational pilot. One of the first and most successful applications of MEMS was accelerometers, which don't need openings in the package to work. MEMS inertial sensors can replace gyroscopes as well as enable inertial navigation, since they can be made to sense rotation as well as linear movement. With the advent of MEMS, avionics makers looked forward to replacing expensive, maintenance-intensive mechanical gyroscopes with MEMS parts. A huge incentive was reliability: a gyroscope that fails can bring down an aircraft. The problem was accuracy. MEMS sensors displayed drift that was worse than the best mechanical gyros. Previous inertial navigation systems used expensive fiber-optic gyros that worked by sending light through a coil of optical fiber and measuring the phase shift (the Sagnac effect) induced by rotation.
MEMS accelerometers didn't get much better, but they are sweeping all of the old mechanical systems into the trash can. So how did this problem get solved? Well, the original technology for GPS satellite location was rather slow, taking up to a minute to form a "fix", but with more powerful CPUs it got much faster. GPS cannot replace gyros, no matter how fast it can calculate. But the faster calculation enabled something incredible: the GPS solution could be used to continuously calibrate the MEMS sensors. By carefully working out the math, a combined GPS/multi-axis accelerometer package can accurately and reliably find a real-time position and orientation in space. You can think of it this way: GPS provides position over long periods of time, but very accurately, and MEMS accelerometers provide position and orientation over short periods of time, but not so accurately. Together they achieve what neither technology can do on its own.
The result has been a revolution in avionics. Now even small aircraft can have highly advanced "glass" panels that give moving maps, a depiction of the aircraft's attitude, and even a synthetic view of the world outside the aircraft in conjunction with terrain data. The system can even tell exactly which way the wind is blowing on the aircraft, because this information falls out of the GPS/accelerometer calculation.
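The long-term/short-term blend described above is, at its simplest, a complementary filter. Here is a minimal one-dimensional sketch (my own toy illustration, not real avionics code; real systems use a Kalman filter over all axes): the accelerometer is integrated at a high rate and drifts, while an occasional GPS fix pulls both the position and velocity estimates back toward truth. The gains `k_pos` and `k_vel` are made-up illustrative values.

```python
def fuse(pos, vel, accel, dt, gps_pos=None, k_pos=0.2, k_vel=0.2):
    """One step of a toy 1-D complementary filter.
    Dead-reckons from the accelerometer every step (fast but drifty),
    and nudges the state toward a GPS fix when one arrives
    (slow but unbiased)."""
    vel += accel * dt
    pos += vel * dt
    if gps_pos is not None:
        err = gps_pos - pos   # innovation: GPS minus inertial estimate
        pos += k_pos * err    # correct position directly
        vel += k_vel * err    # bleed the error into velocity, which
                              # cancels the accelerometer bias over time
    return pos, vel

# Demo: a stationary aircraft whose accelerometer has a constant
# 0.05 m/s^2 bias, sampled at 100 Hz for 100 seconds, with a 1 Hz GPS fix.
pos, vel = 0.0, 0.0
dt = 0.01
for step in range(10_000):
    gps = 0.0 if step % 100 == 0 else None  # true position is 0
    pos, vel = fuse(pos, vel, 0.05, dt, gps)
# Pure integration of the biased accelerometer would drift about 250 m
# over those 100 seconds; the fused estimate stays within a meter or so.
```

The division of labor is exactly as described: the accelerometer carries the estimate between fixes, and the GPS keeps it from walking away.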
197
-
142
-
I think I told this story here before, but it bears repeating. In the early '80s, windowed UV-erasable PROMs (EPROMs) were a thing. It was the time of Japan bashing, and accusations of their "dumping" on the market. We used a lot of EPROMs from different sources, and Toshiba was up and coming. We used mainly Intel EPROMs at the time. The state of the art back then was 4kb moving to 8kb (I know, quaint). Because of the window in the top of the EPROM, you could see the chip. Most of us used this feature, when we had a bad EPROM, to get a little light show by plugging in the EPROM upside down and sealing the fate of the chip.
Anyways, Intel and Toshiba were in a price war, so the chips from each vendor were about equivalent in price. But side by side in the UV eraser tray, what you saw was shocking: the Toshiba chips were about 1/4 the size of the Intel chips. Yes, those "inferior" Japanese were kicking our a**es. Intel struggled along for a while, then exited the EPROM market. The "anti-dumping" thing had exactly one result: we could go to Japan, to the Akihabara market (street vendors!), get chips with twice or four times the capacity of USA chips for cheap, and bring them back in our luggage.
130
-
122
-
101
-
100
-
81
-
60
-
52
-
49
-
41
-
36
-
33
-
26
-
24
-
24
-
24
-
Great video on one of my favorite subjects. I'd like to add a couple things. First of all (as the poster below said), this history skips a very important branch of IC history, the gate array, from which FPGAs take their name (Field Programmable Gate Array). Basically, gate arrays were ICs that consisted of a matrix of transistors (often termed gates) without the interconnect layers. Since transistors then, and largely even today, are patterned into the silicon wafer itself, this divided wafer processing into two separate stages: the wafer patterning and the deposition of aluminum (interconnect). In short, a customer could save quite a bit of money by paying for only the extra masks needed to deposit interconnect, and take stock wafers to make an intermediate type of chip between full custom and discrete electronics. It was far less expensive than full custom, but of course that was like saying Kathmandu is not as high as Everest. Xilinx used to have ads showing a huge bundle of bills with the caption "Does this remind you of gate array design? Perhaps if the bills were on fire."
Altera came along and disrupted the PLA/PAL market and knocked over the king o' them all, the 22V10, which could be said to be the 7400 of the PAL market. They owned the medium-scale programmable market for a few years until Xilinx came along. Eventually Altera fought back, but by then it was too late. However, Altera got the last word. The EDA software for both Xilinx and Altera began to resemble those "bills o' fire" from the original Xilinx ads, and Altera completely reversed its previous stance toward small developers (which could be described as "if you ain't big, go hump a pig") and started giving away its EDA software. Xilinx had no choice but to follow suit, and the market opened up with a bang.
There have been many alternate technologies to the RAM-cell tech used by Xilinx, each with an idea toward permanently or semipermanently programming the CLB cells so that an external loading PROM was not required. Some are still around, but what all that work and new tech was replacing was a serial EEPROM with about 8 pins and approximately the cost of ant spit, so they never really knocked Xilinx off its tuffet. My favorite story about that was one maker here in the valley who was pushing "laser reprogrammability", where openings in the passivation of a sea-of-gates chip allowed a laser to burn interlinks and thus program the chip. It was literally PGA, dropping the F for field. It came with lots of fanfare, and left with virtual silence. I later met a guy who worked there and asked him, "What happened to the laser-programmable IC tech?" He answered in one word: contamination. Vaporizing aluminum and throwing the result outwards is not healthy for a chip.
After the first couple of revs of FPGA technology, the parts started to get big enough that you could "float" (my term) major cells onto them, culminating in an actual (gasp) CPU. This changed everything. Now you could put most or all of the required circuitry on a single FPGA, and the CPU to run the thing as well. This meant that software hackers (like myself) could get into the FPGA game. The only difference now is that even a fairly large-scale 32-bit processor can be tucked into the corner of one.
In the olden days, when you wanted to simulate hardware for an upcoming ASIC, you employed a server farm running 24/7 hardware simulations, or even a special hardware simulation accelerator. Then somebody figured out that you could lash a "sea of FPGAs" together, load a big ol' giant netlist into it, and get the equivalent of a hardware simulation, but at near the final speed of the ASIC. DINI and friends were born: large FPGA array boards that cost a couple of automobiles to buy. At this point Xilinx got wise to the game, I am sure. They were selling HUGE $1000-per-chip FPGAs that could not have had a real end-consumer use.
21
-
18
-
17
-
15
-
14
-
13
-
@bunyu6237 I think others would supply better info than I on this subject, since I haven't been in the IC industry since the late 1980s, decades ago. At that time, the reverse-engineering industry was moving from hand reversing to fully automated reversing. However, if you don't mind speculation, I would say there is no concrete reason why the reversing industry would not have kept up with newer geometries. The only real change would be that it's basically not possible to manually reverse these chips anymore. I personally worked on reversing a chip about 4 generations beyond the Z80, which was not that much. At that time, blowing up a chip to the size of a ping-pong table was enough to let you see and reverse engineer individual transistors and connections.
Having said that, I have very mixed feelings about the entire process. I don't feel it is right to go about copying others' designs. I was told at the time that the purpose was to ensure compatibility, but the company later changed its story.
On the plus side, it was an amazing way for me to get into the IC industry. There is nothing like reverse engineering a chip to give you a deep understanding of it.
However, I think I would refuse to do it today, or at least try to steer toward another job.
For anyone who cares why I have a relationship to any of this: I used to try to stay with equal parts software and hardware. This was always a difficult proposition, and it became easier and more rewarding financially to stay on the software side only, which is what I do today. However, my brush with the IC industry made a huge impression on me, and still shapes a lot of what I do. For example, a lot of my work deals with SOCs, and I am part of the subset of software developers who understand SOC software design.
12
-
11
-
10
-
10
-
9
-
9
-
Just one small addition: when Intel pushed onboard graphics, where the graphics memory was part of the main memory of the CPU, it was thought that this video solution would actually be faster, since the CPU would have direct access to the frame buffer, as well as having all of the resources to access it (cache, DMA, memory management, etc.). They lost that advantage in the long run for two reasons. The first was VRAM, or dual-ported video RAM, a RAM that could be read and written by the CPU at the same time it was serially read out to scan the video raster device. The second was the rise of the GPU, meaning that most of the low-level video memory access was handled by a GPU on the video card that did the grunt work of drawing bits to the video RAM. Intel instead ran down the onboard video rabbit hole. Not only did they fail to win the speed race with external video cards, but people began to notice that the onboard video solutions were sucking considerable CPU resources away from compute tasks. Thus the writing was on the wall. Later, gamers knew onboard video only as that thing they had to flip a motherboard switch to disable when putting in a graphics card, and nowadays not even that. It's automatic.
8
-
7
-
7
-
7
-
7
-
7
-
7
-
6
-
6
-
6
-
What people miss about the RISC revolution is that in the 1980s, with Intel's 8086 and similar products, the increasingly complex CPUs of the day were using a technique called "microcoding": a lower-level instruction set inside the CPU that ran instruction decoding, etc. It was assumed that the technique, inherited from mini and mainframe computers, would be the standard going forward, since companies like Intel were increasing the number of instructions at a clip. RISC introduced the idea that if the instruction set were simplified, CPU designers could return to pure hardware designs, with no microcode, and use that to retire most or all instructions in a single clock cycle. In short, what happened is the Titanic turned on a dime: Intel dropped microcode like a hot rock and created pure hardware CPUs to show that any problem could be solved by throwing enough engineers at it. They did it by translating the CISC x86 instructions to an internal RISC form and deeply parallelizing the instruction flow, the so-called "superscalar" revolution. In so doing they gave x86 new life for decades.
I worked for SUN for a short time in the CPU division when they were going all in on multicore. The company was already almost in freefall. The Sparc design was flawed and the designers knew it. CEO Jonathan faced questioning at company meetings when he showed charts with Java "sales" presented as if it were a profit center (instead of given away for free). I came back to SUN again on contract after the Oracle merger. They had the same offices and the little Java mascots on their desks. It was probably telling that after my manager invited me to apply for a permanent position, I couldn't get through their online hiring system, which was incredibly buggy, and then they went into a hiring freeze, so it was irrelevant.
I should also mention that not all companies did chip design in that era with SUN workstations. At Zilog we used racks full of MicroVAXes and X Window graphics terminals. I still have fond memories of laying out CPUs and chain-smoking in the late 1980s until midnight.
5
-
@allentchang Hummm.... back in 1987 it was LSI workstations, if I recall. I don't know the operating system, but I believe they were Tektronix graphics terminals (Zilog). They were not fast, but very high resolution for the day. In 1993 it was Apollo workstations (Seagate), which were running Mentor. It certainly ran a Unix variant, but an unusual one. Into the new century, it's all been Verilog using Xilinx software (various startups), running on Windows (does Xilinx even run on Linux/Unix?). Our fabs also tell the story: last century it was a custom fab (Zilog), then AT&T's fab (Seagate), then after that probably TSMC, I don't recall.
Afternote: actually, I do recall. At Zilog the Tek terminals were driven by racks and racks of LSI-11s, a PDP-11 that fit in a single or double RU. I remember because we had a big serial port mux that would let you get a connection to any of the machines. I used to write scripts that would start jobs on multiple machines overnight, which was the only way to get reasonable simulations of chips. I believe they were running Unix. Our chip simulations were done on a custom gate-level simulator that I learned a lot from, since it would simulate things like domino logic.
And yes, I am old.
5
-
5
-
@D R That's a great question. In fact we could just use the altitude from the GPS system. Right now you dial in the barometric "base pressure", or pressure at sea level. This is used to calibrate the altimeter so that it delivers accurate results; I believe it must be within 100 feet of accuracy (other sources say the FAA allows 75 feet). It's a big, big deal. A few feet could mean the difference between hitting a building and passing over it. Thus when you fly, you are always getting pressure updates from the controller, because you need updates that are as close as possible to the pressure in your area.
So why not use the GPS altitude, which is more accurate?
1. Not everyone has a GPS.
2. Even fewer have built in GPS (in the panel of the aircraft).
3. A large number of aircraft don't recalibrate their altimeters at all.
4. New vs. old. Aircraft have been around for a long time. GPS not so much.
If you know a bit about aircraft, you also know that number 3 there lobbed a nuclear bomb into this conversation. Don't worry, we will get there. First, there is GPS and there is GPS, implied by 1 and 2. Most GPS units in use (in light aircraft) are portable. Long ago the FAA mandated a system based on transponders called "mode C" that couples a barometric altimeter into the transponder. OK, now we are going into the twisty road bits. That altimeter is NOT COMPENSATED FOR BASE PRESSURE. In general the pilot does not read it, the controller does (OK, most modern transponders do read it out, mine does, but an uncompensated altitude is basically useless to the pilot). The controller (generally) knows where you are, and thus knows what the compensating pressure is (no, he/she does not do the math, the system does it for them).
Note that GPS had nothing to do with that mode C discussion. So for the first part of this, for a GPS to be used for altitude, the pilot would have to go back to constantly reporting his/her altitude to the controller. UNLESS!
You could have a mode S transponder, or a more modern UAT transceiver. Then, your onboard GPS automatically transmits the altitude, and the position, and the speed and direction of the aircraft.
Now we are into equipage. Note "onboard GPS". That means built into the aircraft. Most GPS units on light aircraft are handheld, which are a fraction of the cost of built-in avionics. Please let's not get into why that is; it's about approved combinations of equipment in aircraft, calibration, and other issues. The mere mention of it can cause fistfights in certain circles.
OK, now let's get into number 3. If you are flying over, say, 14,000 feet, it's safe to say you are not in danger of hitting any mountains, or buildings, or towers. Just other aircraft. So you don't care about pressure compensation. So the rules provide that when you are over 18,000 feet, you reach down and dial in the "standard pressure" of 29.92 inches of mercury, which the FAA has decreed is standard (the FAA also has things like standard temperature, standard tree sizes, etc.; fun outfit). So what does that mean? Say you are TWA flight 1, and TWA flight 2 is headed the opposite direction at the same altitude. Both read 18,000 feet. Are they really at 18,000 feet? No, but it doesn't matter. If they are going to collide, they are in the same area, and thus the same pressure, meaning that their errors cancel. It doesn't matter if they are really at 19,123 feet; they both read the same. Thus climbing to 19,000 (by the altimeter) means they will be separated by 1,000 feet.
So the short answer is the final one. The barometric system is pretty much woven into the present way aircraft work. It may change, but it is going to take a long time. Like not in my lifetime.
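The errors-cancel argument above can be put in numbers with the standard pilot's rule of thumb: one inch of mercury of pressure difference is worth roughly 1,000 feet of indicated altitude. This little sketch is my own illustration of that arithmetic (the function name and the example pressures are made up, not from any avionics manual):

```python
STANDARD_PRESSURE_INHG = 29.92  # the FAA's decreed "standard pressure"

def indicated_at_standard(true_altitude_ft, local_pressure_inhg):
    """Approximate altimeter reading with 29.92 dialed in, given the
    actual local sea-level pressure, using the ~1,000 ft per inch
    of mercury rule of thumb."""
    return true_altitude_ft + 1000.0 * (STANDARD_PRESSURE_INHG - local_pressure_inhg)

# Two aircraft in the same air mass (local pressure 30.42 inHg), both on 29.92:
a = indicated_at_standard(19_000, 30.42)  # reads 18,500
b = indicated_at_standard(20_000, 30.42)  # reads 19,500
# Both are "wrong" by the same 500 feet, so the indicated 1,000-foot
# separation is also the true separation. The errors cancel.
```

If the two aircraft were in different pressure areas the offsets would differ and the cancellation would fail, which is exactly why the scheme only works up high, away from terrain, where everyone is on the same 29.92.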
5
-
@DumbledoreMcCracken I'd love to make videos, but I have so little free time. Allen was a true character. The company (Seagate) was quite big when I joined, but Allen still found the time to meet with small groups of us. There were a lot of stories circulating... that Allen had a meeting and fired anyone who showed up late because he was tired of nobody taking the meeting times seriously, stuff like that. He is famous for (true story) telling a reporter who asked him "How do you deal with difficult engineers?"... his answer: "I fire them!" My best story about him was our sailing club. I was invited to join the Seagate sailing club. They had something like a 35-foot Catalina sailboat for company use, totally free. We ended up sailing it every Wednesday in the regular race at Santa Cruz Harbor. It was owned by Allen. On one of those trips, after much beer, the story of the Seagate sailboat came out.
Allen didn't sail or even like sailboats. He was a power boater and had a large yacht out of Monterey harbor. He rented a slip in Santa Cruz, literally on the day the harbor opened, and had rented there since. The harbor was divided in two by a large automobile bridge that was low and didn't raise. The clearance was such that only power boats could get through, not sailboats (unless they had special gear to lower the mast). That divided the harbor into the front harbor and the back harbor.
As more and more boats wanted space in the harbor, and the waiting list grew to decades, the harbor office came up with a plan to manage the space, which was "all power boats to the back, sailboats to the front", of course with an exception for working (fishing) boats. They called Allen and told him to move. I can well imagine that his answer was unprintable.
Time went on, and their attempts to move Allen ended up in court. Allen felt his position as a first renter exempted him. The harbor actually got a law passed in the city to require sailboats to move to the back, which (of course) Allen termed the "Allen Shugart rule".
Sooooo.... comes the day the law goes into effect. The harbormaster calls Allen: "Will you move your boat?" Allen replies: "Look outside." Sure enough, Allen had moved his yacht to Monterey and bought a used sailboat, which was now in the slip. Since he had no use for it, the "Seagate sailing club" was born. That was not the end of it. The harbor passed a rule that the owners of boats had to show they were using their boats at least once a month. Since Allen could not sail, he got one of us to take him out on the boat, then he would parade past the harbormaster's office, honk a horn, and wave.
Of course Allen also did fun stuff like run his dog for president. In those days you either loved Allen or hated him, there was no in-between. I was in the former group, in case you could not tell.
I was actually one of the lucky ones. I saw the writing on the wall, that Seagate would move most of engineering out of the USA, and I went into networking at Cisco at the time they were still throwing money at engineers. It was a good move. I ran into many an old buddy from Seagate escaping the sinking ship later. Living in the valley is always entertaining.
5
-
5
-
5
-
5
-
It's a good question (further research into hard drives). They are still doing some amazing things: advanced magnetic materials, layered recording, etc. However, the basis of the industry is electromechanical, which means it is inherently slower and more complex than SSDs. You can only move a mass (the head arm) so fast.
The recent research in disk drives has gone mainly to increasing their density, and therefore reducing cost. Because this does nothing to help the speed disadvantage of HDDs, this trend will actually accelerate the demise of the HDD industry, because it accelerates the trend of HDDs towards being a backup medium only.
HDDs cannot get any simpler. They have two moving parts, the heads and the disks, and both probably ride on air now (certainly true of heads, not sure about spindles). Because HDDs are more complex and take more manufacturing effort than SSDs, the cost advantage of HDDs is an illusion. The fall is near.
5
-
IC masking and screen printing: well, I think it's more accurate to say that these techniques were well known from the manufacture of printed circuit boards, which were in full swing at the time of the first ICs, and from there you get back to printing, both screen printing and lithography. Also, resists were in use before ICs, used to perform etching on metal, rock, and other surfaces, which is very much still a thing today. In fact, etching glass with acid, still done today, is almost a direct line to ICs, since silicon dioxide is basically glass.
5