Comments by "Scott Franco" (@scottfranco1962) on the "Asianometry" channel.

  2. My favorite application for MEMS is in aviation, since I am a recreational pilot. One of the first and most successful applications of MEMS was accelerometers, which don't need openings in the package to work. MEMS inertial sensors can replace gyroscopes as well as enable inertial navigation, since they can be made to sense rotation as well as linear movement. With the advent of MEMS, avionics makers looked forward to replacing expensive and maintenance-intensive mechanical gyroscopes. A huge incentive was reliability: a gyroscope that fails can bring down an aircraft.

The problem was accuracy. MEMS accelerometers displayed drift that was worse than the best mechanical gyros. Previous inertial navigation systems used expensive optical gyros that worked by sending light through a spool of fibre-optic line and measuring the phase delay due to rotation. MEMS accelerometers didn't get much better, but they are sweeping all of the old mechanical systems into the trash can. So how did this problem get solved?

Well, the original technology for GPS satellite location was rather slow, taking up to a minute to form a "fix". With more powerful CPUs it got much faster. GPS cannot replace gyros, no matter how fast it can calculate, but the faster calculation enabled something incredible: the GPS solution could be used to calibrate the MEMS accelerometers. By carefully working out the math, a combined GPS/multi-axis accelerometer package can accurately and reliably find a real-time position and orientation in space. You can think of it this way: GPS provides position over long periods of time, but very accurately, and MEMS accelerometers provide position and orientation over short periods of time, but not so accurately. Together they achieve what neither technology can do on its own.

The result has been a revolution in avionics. Now even small aircraft can have highly advanced "glass" panels that give moving maps, a depiction of the aircraft attitude, and even a synthetic view of the world outside the aircraft in conjunction with terrain data. It can even tell exactly which way the wind is blowing on the aircraft, because this information falls out of the GPS/accelerometer calculation.
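To make the short-term/long-term blending concrete, here is a minimal one-dimensional sketch of the idea in Python, in the spirit of a complementary filter. Real avionics fuse these sensors with a full Kalman filter over position, velocity, and attitude; every name and gain below is invented for illustration:

```python
def fuse_step(pos, vel, accel, dt, gps_pos=None, k_pos=0.05, k_vel=0.01):
    """One update of a toy GPS/inertial complementary filter.

    pos, vel -- current estimates (m, m/s)
    accel    -- accelerometer reading (m/s^2), noisy and drifting
    dt       -- time step (s)
    gps_pos  -- GPS fix (m) when one is available, else None
    """
    # Short term: dead-reckon with the inertial sensor. Accurate over
    # seconds, but any sensor bias integrates into unbounded drift.
    vel += accel * dt
    pos += vel * dt

    # Long term: when a GPS fix arrives, pull the estimate back toward
    # it. GPS is jittery moment to moment but does not drift.
    if gps_pos is not None:
        err = gps_pos - pos
        pos += k_pos * err   # correct position
        vel += k_vel * err   # bleed accumulated drift out of velocity
    return pos, vel

# one high-rate inertial step, then a step where a GPS fix arrives
pos, vel = fuse_step(0.0, 0.0, accel=0.3, dt=0.005)
pos, vel = fuse_step(pos, vel, accel=0.3, dt=0.005, gps_pos=0.1)
```

Run the inertial step at a few hundred hertz and the GPS correction at a few hertz, and the accelerometer supplies the fast, smooth motion while GPS keeps the long-term drift bounded, exactly the division of labor described above.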
  19. Great video on one of my favorite subjects. I'd like to add a couple of things. First of all (as the poster below said), this history skips a very important branch of IC history: the gate array, from which the FPGA takes its name (Field Programmable Gate Array). Basically, gate arrays were ICs that consisted of a matrix of transistors (often termed gates) without the interconnect layers. Since transistors then, and largely even today, are patterned into the silicon wafer itself, this divided wafer processing into two separate stages: the wafer patterning, and the deposition of aluminum (interconnect). In short, a customer could save quite a bit of money by paying only for the extra masks needed to deposit interconnect, taking stock wafers to make an intermediate type of chip between full custom and discrete electronics. It was far less expensive than full custom, but of course that was like saying that Kathmandu is not as high as Everest. Xilinx used to have ads showing a huge bundle of bills with the caption "Does this remind you of gate array design? Perhaps if the bills were on fire."

Altera came along and disrupted the PLA/PAL market and knocked over the king o' them all, the 22V10, which could be said to be the 7400 of the PAL market. They owned the medium-scale programmable market for a few years until Xilinx came along. Eventually Altera fought back, but by then it was too late. However, Altera got the last word. The EDA software for both Xilinx and Altera began to resemble those "bills o' fire" from the original Xilinx ads, and Altera completely reversed its previous stance toward small developers (which could be described as "if you ain't big, go hump a pig") and started giving away their EDA software. Xilinx had no choice but to follow suit, and the market opened up with a bang.

There have been many alternate technologies to the RAM-cell tech used by Xilinx, each with an idea towards permanently or semi-permanently programming the CLB cells so that an external loading PROM was not required. Some are still around, but what all that work and new tech was replacing was a serial EEPROM of about 8 pins costing approximately as much as ant spit, so they never really knocked Xilinx off its tuffet. My favorite story about that was one maker here in the valley who was pushing "laser reprogrammability", where openings in the passivation of a sea-of-gates chip allowed a laser to burn interlinks and thus program the chip. It was literally PGA, dropping the F for field. It came with lots of fanfare, and left with virtual silence. I later met a guy who worked there and asked him, "What happened to the laser programmable IC tech?" He answered in one word: contamination. Vaporizing aluminum and throwing the result outwards is not healthy for a chip.

After the first couple of revs of FPGA technology, the things started to get big enough that you could "float" (my term) major cells onto them, culminating with an actual (gasp) CPU. This changed everything. Now you could put most or all of the required circuitry on a single FPGA, and the CPU to run the thing as well. This meant that software hackers (like myself) could get into the FPGA game. The only difference now is that even a fairly large-scale 32-bit processor can be tucked into the corner of one.

In the olden days, when you wanted to simulate hardware for an upcoming ASIC, you employed a server farm running 24/7 hardware simulations, or even a special hardware simulation accelerator. Then somebody figured out that you could lash a "sea of FPGAs" together, load a big ol' giant netlist into it, and get the equivalent of a hardware simulation, but at near the final speed of the ASIC. DINI and friends were born: large FPGA array boards that cost a couple of automobiles to buy. At this point Xilinx got wise to the game, I am sure. They were selling HUGE $1000-per-chip FPGAs that could not have had a real end-consumer use.
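Since the comment leans on the RAM-cell idea, here is a toy sketch of what that technology amounts to: a Xilinx-style logic cell is essentially a small RAM used as a lookup table (LUT), and "programming" the part just means loading truth tables into those RAM bits at power-up, which is why the external configuration PROM exists at all. The class and values below are invented for illustration, not any vendor's actual cell:

```python
class Lut4:
    """Toy 4-input lookup table: 16 RAM bits hold the truth table."""
    def __init__(self, truth_table):
        assert len(truth_table) == 16
        self.bits = truth_table  # loaded at "configuration time"

    def eval(self, a, b, c, d):
        # The four inputs simply address one of the 16 stored bits.
        return self.bits[a | (b << 1) | (c << 2) | (d << 3)]

# "Configure" the cell as a 4-input AND gate: only address 15 holds 1.
and4 = Lut4([0] * 15 + [1])
print(and4.eval(1, 1, 1, 1))  # -> 1
print(and4.eval(1, 0, 1, 1))  # -> 0
```

The "permanently programmed" alternatives mentioned above were all attempts to make those bits nonvolatile so the loading PROM could go away.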
  25. @bunyu6237 I think others would supply better info than I on this subject, since I haven't been in the IC industry for decades; I left back in the late 1980s. At that time, the reverse-engineering industry was moving from hand reversing to fully automated reversing. However, if you don't mind speculation, I would say there is no concrete reason why the reversing industry would not have kept up with newer geometries. The only real change would be that it's basically not possible to manually reverse these chips anymore. I personally worked on reversing a chip about 4 generations beyond the Z80, which was not that much. At that time, blowing up a chip to the size of a ping-pong table was enough to allow you to see and reverse engineer individual transistors and connections.

Having said that, I have very mixed feelings about the entire process. I don't feel it is right to go about copying others' designs. I was told at the time that the purpose was to ensure compatibility, but the company later changed their story. On the plus side, it was an amazing way for me to get into the IC industry. There is nothing like reverse engineering a chip to give you a deep understanding of it. However, I think I would refuse to do it today, or at least try to steer towards another job.

For anyone who cares why I have a relationship to any of this: I used to try to stay with equal parts software and hardware. This was always a difficult proposition, and it became easier and more rewarding financially to stay on the software side only, which is what I do today. However, my brush with the IC industry made a huge impression on me, and still shapes a lot of what I do. For example, a lot of my work deals with SOCs, and I am part of a subset of software developers who understand SOC software design.
  41. What people miss about the RISC revolution is that in the 1980s, with Intel's 8086 and similar products, the increasingly complex CPUs of the day were using a technique called "microcoding": a lower-level instruction set inside the CPU that ran instruction decoding and the rest of the machine's internals. It was assumed that the technique, inherited from mini and mainframe computers, would be the standard going forward, since companies like Intel were adding instructions at a good clip. RISC introduced the idea that if the instruction set were simplified, CPU designers could return to pure hardware designs, with no microcode, and use that to retire most or all instructions in a single clock cycle.

In short, what happened is the Titanic turned on a dime: Intel largely dropped microcode and created essentially pure hardware CPUs, showing that any problem can be solved by throwing enough engineers at it. They did it by translating the CISC x86 instructions to an internal RISC form and deeply parallelizing the instruction flow, the so-called "superscalar" revolution. In so doing they gave x86 new life for decades.

I worked for Sun for a short time in the CPU division when they were going all in on multicore. The company was already almost in freefall. The SPARC design was flawed and the designers knew it. CEO Jonathan Schwartz faced questioning at company meetings when he showed charts with Java "sales" presented as if it were a profit center (instead of given away for free). I came back to Sun again on contract after the Oracle merger. They had the same offices and the little Java mascots on their desks. It was probably telling that after my manager invited me to apply for a permanent position, I couldn't get through their online hiring system, which was incredibly buggy, and then they went into a hiring freeze, so it was irrelevant.

I should also mention that not all companies did chip design in that era with Sun workstations. At Zilog we used racks full of MicroVAXes and X Window graphics terminals. I still have fond memories of laying out CPUs and chain-smoking until midnight in the late 1980s.
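For the curious, the "translate CISC to internal RISC" trick can be sketched in a few lines. This toy decoder cracks a memory-operand, x86-style add into the load/op/store micro-ops a hardwired superscalar core can schedule and retire independently. The micro-op names and encoding are invented for the sketch; real decoders are vastly more involved:

```python
def crack(instr):
    """Crack one CISC-style instruction into RISC-like micro-ops."""
    op, dst, src = instr
    if op == "ADD" and dst.startswith("["):   # add [mem], reg
        addr = dst.strip("[]")
        return [
            ("LOAD",  "tmp", addr),           # tmp <- mem[addr]
            ("ADD",   "tmp", src),            # tmp <- tmp + src
            ("STORE", addr,  "tmp"),          # mem[addr] <- tmp
        ]
    return [instr]                            # already RISC-like

for uop in crack(("ADD", "[0x1000]", "eax")):
    print(uop)
```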
  44. @D R That's a great question. In fact we could just use the altitude from the GPS system. Right now you dial in the barometric "base pressure", the pressure at sea level. This is used to calibrate the altimeter so that it delivers accurate results; I believe it is within 100 feet of accuracy (other sources say the FAA allows 75 feet). It's a big, big deal: a few feet could mean the difference between hitting a building and passing over it. Thus when you fly, you are always getting pressure updates from the controller, because you need updates that are as close as possible to the pressure in your area.

So why not use the GPS altitude, which is more accurate?

1. Not everyone has a GPS.
2. Even fewer have built-in GPS (in the panel of the aircraft).
3. A large number of aircraft don't recalibrate their altimeters at all.
4. New vs. old. Aircraft have been around for a long time; GPS, not so much.

If you know a bit about aircraft, you also know that number 3 there lobbed a nuclear bomb into this conversation. Don't worry, we will get there. First, there is GPS and there is GPS, implied by 1 and 2. Most GPS units in use (in light aircraft) are portable. Long ago the FAA mandated a system based on transponders called "mode C" that couples a barometric altimeter into the transponder. OK, now we are going into the twisty road bits. That altimeter is NOT COMPENSATED FOR BASE PRESSURE. In general the pilot does not read it, the controller does (OK, most modern transponders do read it out, mine does, but an uncompensated altitude is basically useless to the pilot). The controller (generally) knows where you are, and thus knows what the compensating pressure is (no, he/she does not do the math, the system does it for them). Note that GPS had nothing to do with that mode C discussion. So for the first part of this, for a GPS to be used for altitude, the pilot would have to go back to constantly reporting his/her altitude to the controller. UNLESS! You could have a mode S transponder, or a more modern UAT transceiver. Then your onboard GPS automatically transmits the altitude, the position, and the speed and direction of the aircraft.

Now we are into equipage. Note "onboard GPS". That means built into the aircraft. Most GPS units on light aircraft are handheld, which are a fraction of the cost of built-in avionics. Please let's not get into why that is; it's about approved combinations of equipment in aircraft, calibration, and other issues. The mere mention of it can cause fistfights in certain circles.

OK, now let's get into number 3. If you are flying at, say, over 14,000 feet, it's safe to say you are not in danger of hitting any mountains, or buildings, or towers. Just other aircraft. So you don't care about pressure compensation. So the rules provide that when you climb through 18,000 feet, you reach down and dial in the "standard pressure" of 29.92 inches of mercury, which the FAA has decreed is standard (the FAA also has things like standard temperature, standard tree sizes, etc. Fun outfit). So what does that mean? Say you are TWA flight 1, and TWA flight 2 is headed the opposite direction at the same altitude. Both read 18,000 feet. Are they really at 18,000 feet? No, but it doesn't matter. If they are going to collide, they are in the same area, and thus the same pressure, meaning that their errors cancel. It doesn't matter if they are really at 19,123 feet; they both read the same. Thus climbing to 19,000 (by the altimeter) means they will be separated by 1,000 feet.

So the short answer is the final one. The barometric system is pretty much woven into the present way aircraft work. It may change, but it is going to take a long time. Like not in my lifetime.
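To put numbers on the "errors cancel" point, here is a sketch of the altimetry arithmetic using the standard ISA pressure-altitude relation. The constants are the usual published ones, but the function is a simplified illustration, not any certified instrument's algorithm:

```python
def indicated_altitude_ft(static_inhg, setting_inhg=29.92):
    """Indicated altitude (ft) from static pressure, given the
    altimeter setting dialed into the Kollsman window."""
    # ISA pressure-altitude relation, then shift by the setting.
    pressure_alt = 145366.45 * (1.0 - (static_inhg / 29.92) ** 0.190284)
    setting_corr = 145366.45 * (1.0 - (setting_inhg / 29.92) ** 0.190284)
    return pressure_alt - setting_corr

# Both aircraft sense the same static pressure and have dialed the same
# standard setting, so their readings agree exactly: 1,000 ft of
# indicated separation is 1,000 ft of real separation, even if neither
# knows its true altitude.
twa1 = indicated_altitude_ft(15.0, 29.92)
twa2 = indicated_altitude_ft(15.0, 29.92)
print(round(twa1), round(twa2))  # identical readings, errors cancel
```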
  45. @DumbledoreMcCracken I'd love to make videos, but I have so little free time. Al was a true character. The company (Seagate) was quite big when I joined, but Al still found the time to meet with small groups of us. There were a lot of stories circulating... that Al had a meeting and fired anyone who showed up late because he was tired of nobody taking the meeting times seriously, stuff like that. He is famous for (true story) telling a reporter who asked him "How do you deal with difficult engineers?"... his answer: "I fire them!"

My best story about him involves our sailing club. I was invited to join the Seagate sailing club. They had something like a 35-foot Catalina sailboat for company use, totally free. We ended up sailing it every Wednesday in the regular race at Santa Cruz Harbor. It was owned by Al. On one of those trips, after much beer, the story of the Seagate sailboat came out.

Al didn't sail or even like sailboats. He was a power boater and had a large yacht out of Monterey harbor. He rented a slip in Santa Cruz, literally on the day the harbor opened, and had rented there since. The harbor was divided in two by a large automobile bridge that was low and didn't raise. The clearance was such that only power boats could get through, not sailboats (unless they had special gear to lower the mast). That divided the harbor into the front harbor and back harbor. As more and more boats wanted space in the harbor, and the waiting list grew to decades, the harbor office came up with a plan to manage the space, which was "all power boats to the back, sailboats to the front", of course with an exception for working (fishing) boats. They called Al and told him to move. I can well imagine that his answer was unprintable. Time went on, and their attempts to move Al ended up in court. Al felt his position as a first renter exempted him. The harbor actually got a law passed in the city to require power boats to move to the back, which (of course) Al termed the "Al Shugart rule".

Sooooo.... comes the day the law goes into effect. The harbormaster calls Al: "Will you move your boat?" Al replies: "Look outside." Sure enough, Al had moved his yacht to Monterey and bought a used sailboat, which was now in the slip. Since he had no use for it, the "Seagate sailing club" was born. That was not the end of it. The harbor passed a rule that the owners of boats had to show they were using their boats at least once a month. Since Al could not sail, he got one of us to take him out on the boat, then he would parade past the harbormaster's office and honk a horn and wave.

Of course Al also did fun stuff like run his dog for president. In those days you either loved Al or hated him; there was no in-between. I was in the former group, in case you could not tell. I was actually one of the lucky ones. I saw the writing on the wall, that Seagate would move most of engineering out of the USA, and I went into networking for Cisco at the time they were still throwing money at engineers. It was a good move. I ran into many an old buddy from Seagate escaping the sinking ship later. Living in the valley is always entertaining.
  54. Good job. There was a bit of conflation there with microcode (is it firmware?). It would have helped to underline that it is entirely internal to the chip and operates the internals of the CPU. In any case, microcode was discarded with the Pentium series, KINDA. It actually lives on today in so-called "slow path" instructions like block moves in the later CPUs, which use microcode because nobody cares whether they run super fast or not, since they are generally only used for backwards compatibility and were deprecated in 64-bit mode.

I await the second half of this! Things took off again with AMD64 and the "multicore wars". Despite the mess, the entire outcome probably could have been predicted on sheer economic grounds, that is, the market settling into a #1 and #2 player with small also-rans. Today's desktop market, at least, remains in the hands of the x86 makers, except for the odd story of Apple and the M series chips. Many have pronounced the end of the desktop, but it lives on. Many or even most of my colleagues use Apple Macs as their preferred development machines, but, as I write this, I am looking out at a sea of x86 desktop machines. It's rare to see a Mac desktop, and in any case, at this moment even the ubiquitous MacBook Pro laptops the trendsetters love are still x86 based, although I assume that will change soon.

Me? Sorry, x86 everything, desktop and laptop(s). At last count I have 5 machines running around my house and office, and 4 laptops. I keep buying Mac laptops and desktops, 'cause, you know, gotta keep up with things, but they grow obsolete faster than a warm banana. Yes, I had PowerPC Macs, and yes, they ended up in the trash. And yes, I will probably buy a Mac M2 at some point.
  76. @ttb1513 The last company where I did silicon design, Seagate, was emblematic of the basic problems with ASICs. It was 1995 and they were new at in-house ASIC design. And they weren't very good at it. It was random logic design before Verilog. An example of this was our timing verification. We had timing chains that were too long for the cycle time, and thus they simply allowed for the fact that the signal would be sampled in the next clock cycle. Now if you were an ASIC designer back then, what I just said would have made you reach for the Tums, if not a 911 call for cardiac arrest. It's an open invitation to metastability. And indeed, our AT&T fab guys were screaming at us to stop that. I got put in charge of hardware simulation for the design, and I have detailed this fiasco in these threads before, so I won't go over it again.

The bottom line was that ASIC process vendors were losing trust in their customers to perform verification. The answer was that they included test chains in the designs that would automatically verify the designs at the silicon level. It meant that the manufactured silicon would be verified; that is, defects on the chip would be detected regardless of what the design did. My boss, who with the freedom of time I can now certify was an idiot, was ecstatic over this new service. It was a gonna fixa all o' de problems don't ya know? I pointed out to him, pointlessly I might add, that our design could be total cow shit and still pass these tests with flying colors. It was like talking to a wall.

In any case, the entire industry went that way. Designs are easier to verify now that the vast majority of designs are in Verilog. I moved on to software only, but I can happily say that there are some stunning software verification suites out there, and I am currently working on one, so here we are.
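To see why a structurally perfect chip can still be "total cow shit", here is a toy model of a scan-style test chain in Python. The tester shifts a known pattern through the chip's flip-flops and compares what emerges; a fabrication defect (modeled here as a stuck-at fault) mangles the pattern, but nothing in the test says whether the logic between the flops computes anything useful. Everything here is a simplified illustration, not a real DFT flow:

```python
def scan_shift(faults, pattern):
    """Shift a test pattern through a chain of flip-flops and return
    what comes out the far end. faults[i] is None for a healthy flop,
    or 0/1 for a stuck-at fault caused by a fabrication defect."""
    n = len(faults)
    state = [0] * n
    out = []
    # Shift the pattern in, then enough zeros to flush it back out.
    for bit in pattern + [0] * n:
        out.append(state[-1])                  # bit leaving the chain
        state = [bit] + state[:-1]             # shift one position
        state = [f if f is not None else s     # apply stuck-at defects
                 for s, f in zip(state, faults)]
    return out[n:]                             # drop the initial flush

pattern = [1, 0, 1, 1, 0]
print(scan_shift([None] * 4, pattern))             # healthy: pattern returns
print(scan_shift([None, 0, None, None], pattern))  # defective: mangled
```

A functionally wrong netlist with flawless silicon passes this with flying colors, which was exactly the point being made to the boss.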
  99. @AlexanderSylchuk Oh, you are begging for my favorite story. I interviewed with a company that made precision flow valves. These were mechanical nightmares of high precision that accurately metered things like gas flow in chemical processes. This is like half the chemical industry (did you know a lot of chemical processes use natural gas as their feedstock?). Anyway, what has that got to do with this poor programmer? Well, like most industries they were computerizing. They had a new product that used a "bang-bang" valve run by a microprocessor. A bang-bang valve is a short piston driven by a solenoid: when not energized, the piston is retracted by a spring, opening an intake port and letting a small amount of gas into a chamber; then the solenoid energizes, pushes the piston up, and pushes the gas out another port. Each time the solenoid activates, a small amount of gas is moved along. Hence the "bang-bang" part. If you want to find one in your house, look at your refrigerator; it's how the Freon compressor in it works.

OK, well, that amount of gas is not very accurately determined, no matter how carefully you machine the mechanism. But it turns out to be "self accurate"; that is, whatever the amount of gas IS that is moved, it is always the same. The company, which had gotten quite rich selling their precision valves, figured they could produce a much cheaper unit using the bang-bang valve. So they ginned it up, put a compensation table in it so the microprocessor could convert gas flows to bang-bang counts, and voilà: here is the product! It worked.

Time to present it to the CEO! The CEO asks the engineers, "Just how accurate is it?" Engineer says: "Well... actually it is more accurate than our precision valves. And far cheaper." The story as told to me didn't include just how many drinks the CEO needed that night. So the CEO, realizing that he had seen the future, immediately set into motion a plan to obsolete their old, expensive units and make the newer, more accurate, and cheaper computerized gas flow valves. Ha ha, just kidding. He told the engineers to program the damn thing to be less accurate so that it wouldn't touch their existing business.

Now, they didn't hire me. Actually, long story: they gave me a personality test that started with something like "Did you love your mother?". I told them exactly where, in what direction, and with how much force they could put their test, and walked out. I didn't follow up on what happened, mainly because I find gas flow mechanics to be slightly less interesting than processing tax returns. But I think if I went back there, I would find a smoking hole where the company used to be. And that is the (very much overly long) answer to your well-meaning response.
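Since the compensation table is the whole trick, here is a sketch of what the microprocessor would be doing: turning a requested flow into a bang-bang pulse rate by interpolating a factory-measured calibration table. Because each stroke moves the same (if imprecisely known) volume, the accuracy comes from calibration, not machining. The table values and names are invented for illustration; the real product's table and units are unknown to me:

```python
# (flow in units/sec, pulses/sec) calibration points, factory-measured.
CAL = [(0.0, 0), (1.0, 52), (2.0, 103), (5.0, 262), (10.0, 530)]

def pulses_per_sec(flow):
    """Linearly interpolate the calibration table."""
    if flow <= CAL[0][0]:
        return CAL[0][1]
    for (f0, p0), (f1, p1) in zip(CAL, CAL[1:]):
        if flow <= f1:
            return p0 + (p1 - p0) * (flow - f0) / (f1 - f0)
    return CAL[-1][1]  # clamp at the top of the calibrated range

print(pulses_per_sec(3.5))  # pulse rate needed for 3.5 units/sec
```

Making the product less accurate, as the CEO ordered, would amount to deliberately corrupting a table like this, which is part of what makes the story so painful.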