Hearted YouTube comments on the Asianometry (@Asianometry) channel.
-
Yours is one of the most interesting channels on YouTube, I think. Tech, history, economics, geopolitics are all combined in a way that is digestible and sensible.
As you pointed out, each new node generation is more expensive than the last. Demand for chips is only going up, but fewer and fewer companies can hang a shingle for the latest node.
TSMC believes it can succeed at 3 nm. That will be difficult in terms of expense, reject rates, and reliability. Such a small node leaves chips more vulnerable to stray electromagnetic radiation and to electron leakage between adjacent channels.
And after 3 nm?
Photolithography on 2-D silicon is nearly at the end of its journey. If Moore's Law is to continue, a new chip concept will be required, sooner rather than later, and one that leaves plenty of room for future advances. It would be good if it were cheap, too, so that many companies could jump in and compete for the rising demand.
We need innovation! Who will supply it? And what will it look like?
I have no idea.
In principle, different semiconductors (beyond silicon) might come into play. I'm not sure how far that would extend the runway for 2-D optically patterned chips; it might not be very far at all.
In principle, 3-D designs might be tried, built by some method other than projecting an image onto a 2-dimensional surface (though I can't imagine what that fabrication process would look like).
In principle, quantum computing might change everything, though a complete, economically viable, better-than-silicon solution remains far in the future, at best. At worst, quantum computing will be a niche adjunct to silicon processing. Or, at the very worst, nothing from quantum computing will ever reach commercial viability.
Photonic computing? Who knows.
The irony is that matter itself appears to be computationally rich. Everything computes; we just don't know how to take advantage of it.
109
-
Initially, FPGAs were not a serious threat to LSI's gate-array/standard-cell designs because the per-unit cost of FPGA devices was very high by comparison, and the turnaround time for gate arrays was very fast. When I was an LSI customer in the early 1990s (at Intel), I got silicon back in 8 days (gate array) and 17 days (standard cell with 3 metal layers). ASIC costs were quite low, too. Gate-array NRE charges were around $10K for the LMA100K designs I did in 1 micron, and I think NRE for the LCB007 standard cell, which was around 0.7 micron, was around $35K. From there, you went into production at costs well below $20 per chip, and that was way less than FPGAs.
Then, as process technology advanced, the number of masks increased and costs exploded. NRE for 0.18 micron (LSI's G11 process) was $250K and up. At the same time, FPGA costs were coming down as the number of available gates continued to increase, so LSI got squeezed at both ends: better, faster, cheaper FPGAs luring customers away at the low end, and the high cost of taping out designs at the high end.
I just retired from the chip industry last year. When I started in 1985, I literally did the entire design myself (architecture, design, simulation, debug, layout, timing analysis, test vectors, etc.). Today, you have an army of engineers working on a chip, and we each only get to work on small bits of the design.
105
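The squeeze described above is, at bottom, break-even arithmetic: an ASIC only pays off once its one-time NRE is recovered through its lower per-unit cost, and rising NRE plus falling FPGA prices push that crossover volume up sharply. A minimal Python sketch of that arithmetic, using the NRE and ASIC unit-cost figures from the comment and purely assumed FPGA unit prices ($100 early on, $40 later):

# Break-even sketch for the ASIC-vs-FPGA economics described in the comment above.
# NRE and ASIC unit costs come from the comment; the FPGA unit prices are
# assumed purely for illustration.

def break_even_volume(nre, asic_unit_cost, fpga_unit_cost):
    """Volume at which the one-time ASIC NRE is paid back by per-unit savings."""
    return nre / (fpga_unit_cost - asic_unit_cost)

# Early 1990s: ~$10K gate-array NRE, ASIC units "well below $20";
# the $100 FPGA unit price is an assumed placeholder.
print(break_even_volume(nre=10_000, asic_unit_cost=20, fpga_unit_cost=100))   # 125.0

# Late 1990s: NRE of $250K and up at 0.18 micron, while FPGA prices fell;
# the $40 FPGA unit price is again an assumed placeholder.
print(break_even_volume(nre=250_000, asic_unit_cost=20, fpga_unit_cost=40))   # 12500.0

Under those assumed prices the crossover moves from on the order of a hundred units to over ten thousand, which is the low-end erosion the comment describes.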