Comments by "Siana Gearz" (@SianaGearz) on the "Asianometry" channel.
-
As it happens, i have some relevant experience. 10 years ago, i was a developer on a commercial 3D engine. At the time it was slowly failing as a game engine; we were shipping some WiiWare and XBLA shovelware, but for all intents and purposes, Unity was eating our lunch. Before Unity came about, we had the most popular indie and small-scale game engine in the region, with some AAA-ish successes. There were just a couple of exceptions to that decline.
One is that we had carved out a niche in virtual studio work, that is, all those news and weather and other TV broadcasts filmed against a bluescreen: they had camera tracking and realtime compositing, and the studio would be assembled and rendered in a game engine. I never thought this trend looked good or was good, and as it happens, it's now dead; the virtual studios have been gone for 5 years.
The second was CG movie previsualisation. The problem it solves: when you're filming a movie against blue/greenscreen, where do you put the camera, and what will actually be in the shot once the post production team is done with it? Previs doesn't have to look entirely photorealistic, but all the right things have to be in the right place and it has to respond to lighting. On-set previs with camera tracking lets the DP and director see on a screen an adequate approximation of what they're aiming towards, right there on set. So that was our second hope to survive, and we were well equipped, given an existing camera tracking partnership, the TV foothold and the technical traits we offered. Unfortunately, a lot of other companies also recognised the need - it was actually a massive push at the time. Crytek was the only big game engine player represented in this area directly, and they had a lot going for them, like A LOT; others had some sort of ad hoc tech specialised for the purpose, bound more closely to industry animation software, or also built on top of existing game engines.
What's happening now is that Unreal is eating everyone's lunch and digging hard under Unity on all fronts. Indie devs are also gradually less interested in Unity, and well, you have mentioned The Mandalorian yourself. What Unity needs is a foothold and some industry expertise to enter this industry. If they can get at least one major studio to not go Unreal, they have at least some hope here. There appear to have been a number of what i might describe as panic acquisitions by Unity, just to make sure Epic/Unreal doesn't get there first.
Also, it is a little funny to hear a company whose core engineering on its namesake product is based in Denmark and staffed first and foremost by European devs described as "American", but that's the world we live in, i guess.
-
@megalonoobiacinc4863 Pollution? No. I would say pollution is something that you release into the environment uncontained, and i don't anticipate substantial releases like that in the future; to a small extent, radioactive material is just naturally present in the environment, so as long as you stay well under that order of magnitude, it's fine - and consider also that burning coal releases radioactive pollution as well.
Contained waste products that need long term management are what you might be thinking of. Ultimately the consensus has emerged that you can just dig a deep enough hole in some vaguely stable rock, bury the previously stabilised stuff in there, then forget about it, and it's absolutely fine long term without reservations, at least at the foreseeable future scale of waste production (you possibly can't just scale it up endlessly); but then there are NIMBYs.
Other solutions include further use of what is currently waste, but there are political roadblocks there as well.
Ultimately the current form of nuclear power is a temporary measure, to hold us over for maybe the next 50 years or less until we figure something out. As far as temporary measures go, i'd say it's pretty decent, as long as it isn't the only hydrocarbon independence measure taken but part of a broader strategy. It's also a temporary measure because we're going to run out of uranium, so our potentially limited ability to safely entomb waste is well balanced by the raw resource being limited in the first place. So waste isn't really a problem.
-
Strong disagreement on all points. German engineering businesses have their hands in a LOT of pies, thousands of them. Germany has a semiconductor business among others; Infineon is one of the larger semiconductor companies and is German. Germany also holds world leadership positions in chemistry (BASF etc.), medical technology, robotics, and factory and engineering equipment, and companies like Bosch are involved in more engineering and manufacturing ventures than you can count, always behind the scenes as a critical supplier.
SSDs will almost fully supplant HDDs for almost all uses when the price is right, and the price of SSDs has basically no lower bound in sight. HDD capacity per square inch has barely grown in the last 10 years, and an HDD is an extremely complex assembly which can no longer be reduced in cost.
SSDs are suitable for approximately 5-7 years of online storage use, just like HDDs. So when the device is normally connected to power, it will retain the data, and it can survive power removal for a moderate length of time. What SSDs are not suitable for is offline storage. If you disconnect an SSD from power, after a year or two of being unpowered it will rapidly start losing data, while an HDD loses data when online but remains perfectly stable offline. An SSD, while connected to power, periodically "refreshes", i.e. reads and rewrites the data stored on it, to nudge the gate voltages back into alignment and reduce data loss; when it's not powered, the drift occurs uncontrolled. The refresh itself consumes a small percentage of the SSD's overall limited write endurance.
In this context, "online" means whether the device is powered and running and the data is accessible within fractions of a second after being requested.
Currently, the vast overwhelming majority of data storage is HDD-based, online, redundant.
To the extent that SSDs exhibit data loss even online, it's only a matter of redundancy and cost. So when SSDs reach half the price/GB of HDDs, you can simply use twice as many SSDs to store data more redundantly, and you will also use less power than with HDDs; the break-even point will come much earlier than that. Considering SSD power efficiency and power cost, it's likely that when price/GB reaches near parity, HDDs are dead for all uses except offline backup.
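To put a rough number on the power argument, here is a minimal back-of-the-envelope sketch; every figure in it (price per kWh, watts per TB, service life) is an assumption for illustration, not market data.

```python
# Rough 5-year electricity cost per TB of always-on storage.
# Every number here is an assumption for illustration, not real market data.
YEARS = 5
USD_PER_KWH = 0.15

def electricity_usd_per_tb(watts_per_tb):
    """Electricity cost per TB over the assumed service life."""
    return watts_per_tb * 24 * 365 * YEARS / 1000 * USD_PER_KWH

hdd = electricity_usd_per_tb(0.5)   # assumed ~0.5 W per TB for spinning disks
ssd = electricity_usd_per_tb(0.1)   # assumed ~0.1 W per TB for flash

print(f"HDD electricity over {YEARS} years: ${hdd:.2f}/TB")
print(f"SSD electricity over {YEARS} years: ${ssd:.2f}/TB")
print(f"An SSD can cost ${hdd - ssd:.2f}/TB more and still break even on power alone")
```

With these assumed figures the premium comes out to a couple of dollars per TB, i.e. the crossover sits near price parity rather than at half price, which is the point being made above.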
In offline storage, HDDs will face more competition from tape. Currently, HDDs only win in offline storage because their low price is underwritten by the large mass market that needs HDDs for online storage and general use, and that market will break away. When HDD prices rise, there will be a re-evaluation that favours tape again. There is magnetic tape (LTO) and potentially optical laser tape. I think Seagate is gearing up for LTO expansion.
The economic crossover point, and with it a drastic decline in HDD demand, will occur within 5 years.
-
I don't know shit about Mexico, but i'll tell you what about the bits and bobs of the Soviet Union. You know, the whole of Russia, Ukraine, Kazakhstan and the rest of them? Well, maybe not right now, but if you look back 10, 15, 20, 25 years, nothing happened there either.
So first off, why you'd want to. Well, there's the people, who are highly educated, lots of engineers and scientists, but also skilled former factory workers, from when these countries fully supplied themselves but industrial automation and efficiency were low. With a little money they will create, automate, optimise, and they're naturally resourceful; they're used to things not being quite by the book, things being impossible to acquire, but they invent and make do. There's also space and natural resources, so the cost factors are very much on your side.
But you can't, it would be the worst idea ever. Reason number one is organised crime. "Racket" they call it. Shakedowns. Everything you build, any machinery you acquire, it's not safe, you are probably going to lose it.
And of course organised crime has all the ties to the police and government, while you as a budding industrialist do not. They shield each other. The bureaucrats have gotten so used to being fed that they have created a system where they can do anything and you can do nothing; they will create endless impediments, and you will be out an unpredictable amount of money trying to bribe them, and bribes aren't optional, they are a hard necessity of doing business. You won't even know how and where and why everything is crumbling around you.
Even if you were able to navigate the whole thing, like you make your connections to organised crime and you actually get protection and help navigating the bribes, you are likely still to get caught in the crossfire between different crime syndicates. And the hidden tax is overall incalculable, and expands to encompass anything you might earn.
So yeah i don't know shit about Mexico, but if they've got some of those issues on their hands, you can't build there.
-
Zaydan Naufal Yeah, i have Vizplex based e-readers, but they aren't very suitable for general tasks. Actually i have one SONY with a touchscreen and it's an annoying piece of garbage, and i had one by the Chinese company Hanlin, customised by a Ukrainian partner company, LBook, and it was actually top notch.
I used to do quite a bit on my Casio PocketViewer and Psion5 in the late 90s, early 2000s. Those had monochrome screens, worked nicely enough in sunlight. I'm pretty sure i had some pretty hefty books on my PocketViewer as well. Oh wait i should just drop some batteries in there, i bet it still has all the data from back then... as opposed to Palm Pilot and Handspring Visor... although Visor Edge just feels SO NICE, i actually have one of those as well, so nice.
I actually still play the original Gameboy Advance outdoors :D the only game console that works in the sun better than indoors.
-
@ffls775 There's a large semiconductor industry, yes: automotive, industrial, power semiconductors, sensor devices, specialised stuff. But maybe a little too focused on legacy products, and a little aging, if my impression is correct? A lot of these are long-running products though, which don't change all that much decade by decade; new engineering insight has been drying up because the products are well polished, and the well-worn processes are better than fresh ones for a number of things. Legacy customers too, the likes of Toyota and Aisin don't necessarily like change. But just go ahead and look through the product catalogues of Toshiba semiconductor, Rohm and Renesas, these should be the largest ones. A number of smaller companies are now in Taiwanese hands but should still be manufacturing in Japan, i guess.
-
@tomlxyz There was a shortage incoming. There hasn't been enough capacity provisioning to meet demand medium term. Even without the bursts in demand due to work-from-home and crypto boom(s), the capacity would run out shortly anyway, maybe not in late 2020, but in just a couple years. Every year, people need slightly more flash storage, more cloud servers, more this and more that, now fridges have computers too for some unfathomable reason.
Overprovisioned capacity is, to an extent, a good thing. Sure it just costs money when it's not needed, but not having capacity when it's urgently needed can be even more devastating. Whole industries depend on this capacity being present.
Besides, the big growth cycle right now isn't actually that big relative to the existing size of the industry.
-
Oof, that topic. Unfortunately the research is reeeeally shaky, and the public understanding of it is even worse. Odds are, if you've heard of microplastics, the only thing you've heard is that you eat 5g worth of plastic every week in the form of microplastics. That seemed to me to require a little explanation, so i followed up on where it came from.
The figure actually comes from a private report done for WWF Singapore by scientists at the University of Newcastle, Australia. WWF then produced pamphlets with the wording that people "could be ingesting approximately 5 grams of plastic every week on average".
Subsequently the scientists, Kala Senathirajah et al., produced their own paper with the underlying methods and conclusions. The paper has two halves, basically. First they estimate the number of particles an average human ingests, and its sources and pathways; it's a meta-study, or a survey of existing research, and it's actually pretty good. It comes up with a figure of 2000 particles a week, with a confidence window of an order of magnitude.
Continued in the next comment because i'm running out of comment length.
-
Continued...
The second half of the paper then estimates the weight, by fitting different solid shapes of up to 1mm in size (wat) to the number of particles, and... well, it's really a load of handwaving at that point. The study authors themselves give a confidence range of between approximately 0.1g/week and 5.6g/week, with their best-effort estimate being 0.7g/week. And given their methods, i think they are understating the width of that confidence interval; i think it's much worse than that, with the real uncertainty spanning several orders of magnitude. Really, with existing data, no good statement can be made about how much plastic, by weight, one ingests. That WWF chose to run with the 5g figure should tell you something. They presumably requested this estimate, then took something near the upper bound and ran with it; it's purely a scare tactic to drum up more funding. Which i'm a little ambivalent on, given that yeah, more funding is needed; but i wouldn't entrust WWF with it. It erodes public trust in science and eco activism both, since it's fundamentally a dishonest tactic; someone at WWF will pat themselves on the back for a successful campaign, for winning a battle, while edging closer to losing the war.
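To see why the weight estimate swings so wildly, here is a minimal sketch: the same 2000 particles/week figure converted to weight under different assumed particle sizes. The sphere shape and the 1 g/cm3 density are my illustrative assumptions, not numbers from the paper.

```python
# Weight of 2000 ingested particles/week as a function of assumed particle size.
# Sphere shape and 1.0 g/cm^3 density are illustrative assumptions only.
import math

PARTICLES_PER_WEEK = 2000      # the particle-count estimate from the first half
DENSITY_G_PER_CM3 = 1.0        # assumed; common plastics sit around 0.9-1.4

for diameter_um in (10, 100, 1000):            # 10 um, 100 um, 1 mm spheres
    radius_cm = diameter_um * 1e-4 / 2
    volume_cm3 = 4 / 3 * math.pi * radius_cm ** 3
    grams_per_week = PARTICLES_PER_WEEK * volume_cm3 * DENSITY_G_PER_CM3
    print(f"{diameter_um:>5} um particles -> {grams_per_week:.1e} g/week")
```

The result runs from roughly a microgram to roughly a gram per week depending purely on the assumed size, which is why the particle count is a far more meaningful number than any single weight figure.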
I also think the lack of good weight data is not entirely a coincidence, given the limited relevance of the bulk weight of small particulate: environmental erosion and chemical interaction are surface based. For weight, what you want to know is how much (macro)plastic you put into the ocean, and that's hardly a secret. Microscopic surface estimates are difficult, but particle count might just be a good enough proxy. Just not for weight. Like, you request useless data, you get useless conclusions.
-
I do think the world, especially the US and Europe, made a MASSIVE mistake by not treating the disease in the first few weeks as if it were exceptionally dangerous and super deadly, just to err on the side of caution, with a very temporary true zero-case goal. Maybe it could have been stopped dead in its tracks and we would have avoided the whole everything that followed. This is something i advocated back in Feb-March 2020, but i'm a nobody, i decide nothing, and indeed i have no relevant professional background or advanced knowledge; but then, it's not like politicians put epidemiologists in the driving seat on this either. I saw no way it could still have worked several months later as the situation developed.
In turn, China having to deal with the issue now comes from it being reimported from the rest of the world. The developing world got it from the first world, and there it kept mutating and couldn't be stopped.
As to longer term policies, i don't have a strong definitive opinion. It pains me to see people die, and it pains me that some people are so dumb as to bring themselves and the people they encounter into easily avoidable increased danger; but even the semi-lockdown state with limited freedom is also insanely depressing and has consequences as well, as it goes on for months and years.
-
I don't know for certain, but as far as i can see here from Europe, diminishingly little or not at all? Chinese semiconductors and the rest of the world are like a semi-permeable space: Chinese stuff has some availability (some product lines are canned), but all the US, Euro, JP, even Taiwanese companies have pretty much ZERO availability of trailing edge chips. Chinese electronics companies use Chinese stuff and classic global stuff, after all they learn from the rest; non-Chinese companies use non-Chinese stuff almost exclusively, with very little spilling in, say via Diodes Inc acquisitions and this and that, but really nothing all too important there. These days people around the globe are starting to familiarise themselves with Chinese domestic chips, in part just to have an alternative available, and Chinese companies are writing English datasheets more than they used to.
-
I don't see GPUs becoming obsolete in the near future.
What happened with soundcards is that they grew ever more complex: synthesizers, gaming 3D audio, all that stuff. Actual processing power. But then CPUs outran them and it made more sense to just integrate a simple, dumb soundchip on the mainboard. By the way, Intel killed the HD Audio interface (i2s multichannel + i2c command) that was used previously; now onboard audio is USB. I actually find that exciting, maybe we'll see Crystal (Cirrus Logic) or Burr-Brown return. Crystal is gearing up for sure.
Now, CPU progress is slowing down, but GPUs are still trending up in performance, growing with software (game) requirements into monsters which far outstrip the power budget of the CPU package. The memory interface of the CPU is unfortunate for them too. For sure i see a possible distant future where everything is an SoC, but not yet, not in the next 5 years. Anyone who says they can see further than that into the future is probably lying.
Now, the main GPU uses besides games are crypto and AI, and those are of course just waiting for ASICs to catch up.
But indeed, as far as standard office computers go, laptops as well, there's no need for a dedicated GPU; but it's been that way for decades.
-
It's a horrible waste of money. We have seen that in Russia: massive amounts of money and no output. Plus the expertise of the institutes maintained this way is... debatable, since their hands-on experience is with long obsolete technology and doesn't include overcoming scale and quality obstacles. The premise that you can keep just a handful of experts is flawed, since production of semiconductors requires the combined expertise of thousands of people, whether these people actually do something or only sit on their hands because they don't have equipment funds, a customer, etc. Every person is capable of developing and internalising expertise in only a very small part of the process. Then you have brain drain: you train people up as best you can, but after a while of sitting on their hands, people get frustrated and up and go somewhere where they can develop their skills further, and they never come back.
Hibernated, insular industries are extremely wasteful. You either need to leverage someone else's hard-earned expertise and economy of scale, or you develop your capability to competitiveness; either way you need to slot into the international ecosystem.
-
Were the lines shut down? Recently - i don't think so. Most of the 200mm shutdown happened a decade+ ago. The industry was moving to 300mm, so it wasn't a temporary problem, it was seemingly a lack of future viability.
Except then it turned out everything needs legacy semiconductor devices, huh, and the total consumption of electronics keeps growing. Hindsight is 20/20. By the time it became evident several years ago that there may be a problem on our hands, there wasn't much to be done about it other than to try to move things to fresher nodes. Try.
And i mean, temporary problems which last 5+ years aren't so temporary if there's no money to be made in the interim; you run dry, you run bankrupt. Where do you get money from, what kind of an investor pitch is it when you say "help us maintain ancient obsolete stuff, trust me bro it'll matter someday again, just a rough patch bro, we know it's looking like what we're making is useless in 5 years but we just need to sit it out bro"? A line on a chart going down tends to be a death sentence.
-
@UpLateGeek I am PAINFULLY aware of 2 year lead times, you don't have to tell me. I don't think anything substantial was shut down during the pandemic, or i haven't heard of it. I know car manufacturers dropped some orders, and then scrambled to get them back, but that wouldn't cause fabs and suppliers to close. They couldn't get new orders in because the capacity was already booked for other semiconductors.
Fundamentally, nothing needed to be shut down for us to get where we got. Ingredient 1: electronics consumption keeps growing year to year by maybe a couple percent. 2: everything needs legacy components, jellybeans, classics, which are manufactured using old tech and which are uneconomical or impossible to move to new tech. 3: so as total electronics demand and demand for modern electronics increase, they drag demand for legacy components up with them. 4: in 2020, worldwide electronics sales shot up by 14%, which is WILD, it had pretty much never happened before; and demand remains high even now.
Given the huge jump in demand, some end device manufacturers cancelling their chip orders doesn't matter and wouldn't cause production stops, as the total demand increase is more than high enough to absorb any capacities that were in use previously.
I have heard a conjecture somewhere that the problem with legacy component production capacity was a long time coming, simply because demand keeps growing while the legacy capacity is all we've had for a decade; but it wasn't expected to actually hit until 5+ years later, and more gradually, and was pushed forward and made urgent by a sudden jump in demand. Do keep in mind that we have some overcorrection on our hands: electronics manufacturers which thought they didn't need to keep extra stock of jellybeans and were buying them on a weekly basis as needed (Just in Time manufacturing) now suddenly do, and many have decided to keep a year of advance stock that they're currently filling up; so the situation will take a long while to relax a little.
-
@punditgi yeah, it's down to parts, and when you design something in Europe, you can't just design in European parts, you know; you're going to have a mixture of American, French-Italian, Japanese, Taiwanese and Chinese semiconductors, and in turn they use standards borrowed from other companies' standards from somewhere else in the world entirely, with a different design culture. The footprints are pretty universal across the world, and they are universally a mess across the world. Just some company did something, then the rest did something footprint-compatible, and it became a standard eventually.
The Soviet Union tried to put things on a millimetre grid, but that was even more infuriating, since you might have functionally compatible and similar-looking chips with both 2.54mm spacing (international) and 2.5mm spacing (domestic), and domestic chips were occasionally produced with the export leadframe, so you never knew what you'd get. And the subtle, bad-contact-prone misalignment on a 40-pin socket, oh my God; I can't even bear the thought of a 64-pin chip going through that sort of torture...
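For a sense of scale, a minimal sketch of how that 0.04mm pitch difference accumulates along a row of pins; the package sizes are just the usual DIP examples, assumed for illustration.

```python
# Cumulative pin offset when a 2.50 mm pitch (Soviet metric) package meets a
# 2.54 mm pitch (imperial) socket. Purely geometric; package sizes assumed.
PITCH_METRIC_MM = 2.50
PITCH_IMPERIAL_MM = 2.54

for pins_per_row in (7, 14, 20, 32):          # 14-, 28-, 40- and 64-pin DIPs
    worst = (pins_per_row - 1) * (PITCH_IMPERIAL_MM - PITCH_METRIC_MM)
    print(f"{pins_per_row * 2:2}-pin DIP: up to {worst:.2f} mm offset at the row ends"
          f" (+/-{worst / 2:.2f} mm if the chip is centred)")
```

So a 40-pin package is already fighting roughly three quarters of a millimetre at the end pins, and a 64-pin one well over a millimetre, which is why the thought is unbearable.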
Oh yeah, that's kinda still a thing in the rest of the world, with some connectors having the same diameter pins but a pitch of 2.5 or 2.54mm, so best not to confuse the footprints or think of a last minute substitution... But at least those aren't chip sockets.
Newer footprints tend to be metric based, but we haven't managed to get rid of the occasional mil footprints in 35 years... So yeah, that's how it stays.
You can tell I built my first computer in the Soviet Union - by soldering it together chip by chip. And I had the curse of finding only an imported Z80, but only domestic DRAM. It was the worst combination.
-
@autohmae When powered, SSDs have excellent long term endurance, not really substantially better or worse than HDDs, but they do need to be able to auto-refresh the data once in a while, which consumes a small percentage of their write cycles, so they need to stay powered on regularly. The reason HDDs are used for cold storage is that they survive it nicely and are cheap. And they are cheap because there's a huge mass market that needs them for their sheer capacity per $ as online, powered-up storage. Once that market collapses due to price parity, HDDs should go up in price because the economy of scale collapses, which will probably cause tape to become more prevalent for cold storage. LTO, or maybe laser optical. Seagate is one of the forces behind LTO; WD and Toshiba are major SSD/Flash players.
In 10 years, capacity per platter went from about 1TB to 2.2TB with heat or microwave assist. How much in the next 5-10 years? Maybe 3TB? Maybe 4TB? At increasing expense in hardware, and with no way to keep pace with SSD tech that is already in engineering stages and whose expense is trending down with no roadblock in sight.
-
@sciencecw But there was literally no population reduction in 2014, nor in 1989, nor in 1990, nor in 96 or 97, so... what does that have to do with the question of whether the population is growing or not? In fact in 96 the population grew by well over 4%, which is remarkable. Like, sure, some people go away, others come in to replace them, because the region is relatively prosperous, and these new people have different circumstances and aren't afraid of whatever drove the others out.
Right now you have two things overlapping, the epidemic and its effects on the world economy, and political shakeups specific to HK, and i think it'll likely recover and keep growing slooooowly. Every exodus wave stops eventually.
-
@msimon6808 Sometimes they do, sometimes they don't. I don't think low-ESR electrolytic caps are a good idea to begin with, since in my experience they become standard-ESR caps with time, or even no-caps, with a few nice exceptions. When you need to handle high ripple currents, you might as well pair a standard-ESR electrolytic with a ceramic of maybe 1uF or 100n, where the ceramic will take up the bulk of the ripple current and the lytic will take up the bulk of the capacitance. When used correctly (which, i know, is maybe a little much to ask of mainstream manufacturers today), even cheap standard aluminium electrolytics are generally very reliable, which is not easily said about low-ESR types.
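A rough sketch of how that ceramic/electrolytic split works, using a magnitude-only current divider that ignores ESL and phase; the capacitor values and ESRs here are my assumptions for illustration, not recommendations.

```python
# Approximate split of ripple current between a small ceramic and a standard-ESR
# electrolytic on the same rail. Magnitude-only, ignores ESL and phase; the
# component values and ESRs are illustrative assumptions.
import math

def impedance(c_farads, esr_ohms, f_hz):
    xc = 1 / (2 * math.pi * f_hz * c_farads)       # capacitive reactance
    return math.hypot(xc, esr_ohms)                # |Z| with series ESR

CERAMIC = (1e-6, 0.005)       # 1 uF MLCC, ~5 mOhm ESR (assumed)
LYTIC   = (470e-6, 0.5)       # 470 uF standard-ESR electrolytic (assumed)

for f in (100, 10_000, 500_000, 2_000_000):
    z_c = impedance(*CERAMIC, f)
    z_l = impedance(*LYTIC, f)
    ceramic_share = (1 / z_c) / (1 / z_c + 1 / z_l)
    print(f"{f:>9} Hz: ceramic carries roughly {ceramic_share:5.1%} of the ripple")
```

The pattern is the point: at mains-ripple frequencies the electrolytic does nearly all the work, while at switching frequencies and their harmonics the ceramic takes over, so the lytic mostly just has to supply bulk capacitance.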
It's probably mostly a cost saving measure, since i had to specifically buy 10uF 25V ceramics in 1206, what a coincidence, and they cost... a substantial amount of money, several cents per cap, while even low-ESR lytics from some off-brand cost next to nothing. The ceramics have their own pitfalls as well - the capacitance is different every time you look at them, and they often fail short, presumably due to defects or external thermal or mechanical shock rather than electrical abuse. Companies aren't really interested in whether something will survive beyond 2-3 years, but they do need to keep warranty period failures fairly low.
-
@trudyandgeorge actually a fair number of circuits support automatic output muting to gnd, and some others have enough precision to drive very low magnitude signals quite accurately, though that's trickier.
However, you also have quantisation noise, from the discrete amplitude values that can be recorded. This noise overlays all sound, and for a CD it sits at a magnitude of about -96 or -98dB, i forget which one marks the nominal dynamic range.
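For reference, both figures are standard and come from the same 16 bits; a quick calculation shows where each one comes from.

```python
# Where the ~96 dB and ~98 dB figures for 16-bit CD audio come from.
import math

BITS = 16
steps = 2 ** BITS

# Ratio between full scale and one quantisation step (one LSB):
range_db = 20 * math.log10(steps)          # ~96.3 dB

# SNR of a full-scale sine over uniform quantisation noise, 6.02*N + 1.76:
sine_snr_db = 6.02 * BITS + 1.76           # ~98.1 dB

print(f"full scale vs 1 LSB: {range_db:.1f} dB")
print(f"full-scale sine SNR: {sine_snr_db:.1f} dB")
```

So -96dB is the step-size figure and roughly -98dB is the sine-wave SNR figure; both describe the same 16-bit format before any dithering tricks.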
But we have noise-shaped dithering, which shifts this quantisation noise up the frequency range into a band where hearing sensitivity is lower, keeping it out of the sensitive midrange. So the perceived dynamic range of CD is actually larger than the nominal figure. Various values have been suggested for where one should stop chasing extra equipment fidelity, usually about -99 to maybe -110 dB noise and distortion in the midrange.
The pain threshold is generally around 120dB, but you have to consider that you're not in a noise-free environment: the hearing threshold is 0dB, but a quiet room has around 25dB of environmental noise. If you wear headphones to block it out, you hear your blood vessels making noise. Music is usually mastered for an 80-85dB average listening level, with the average sitting perhaps 10-15dB below peak amplitude, more for classical. Even a -80dB noise floor can appear quite inoffensive, usually hidden, if not quite ideal.
-
@simplemechanics246 Different video delivery method. Androids only have H264 hardware, so YouTube ships only H264 video to Android and other mobile-like devices. But on PC, they know you have a bigger CPU, so you get VP9 and AV1. VP9 in particular is Google property and GPU vendors have shown some resistance to adopting it, and AV1 also has hardware compatibility issues. So you often get software-only decoding; CPUs are slow at this, and shovelling that much video data to the GPU is hard. Then there are differences in the surrounding software: a webpage with control overlays vs. an optimised app. Google controls, understands and optimises the whole Android ecosystem, but on PC things are "good enough" and are left at that, so there's a lot of sloppy craftsmanship. As to other video services, there's again web overhead, and much more DRM overhead on an open platform like the PC - on Android they can just check that the device is locked down.
Console APUs are scary at around 300W; you can't simply put that in a PC, you need to design the computer around something that powerful. Of course it's a matter of time until this performance fits into a 125W envelope, at which point it very much can be shipped in a PC, which is why nobody is in a hurry to design a PC platform that can carry a 300W APU. It would probably take more than 3 years, and it's not certain whether it'll happen either.
-
China has halfway burned through the workforce that, about a decade ago, would tolerate anything you threw at it; today's young workforce is increasingly revolting against even moderate inconveniences, or just lying flat and doing the bare minimum, and Chinese worker salaries and costs of living have approached European ones. Long term, there will be another low-wage country to offshore to; the advantages China maintains for the foreseeable future are infrastructure, supply routes, and DFM engineering, i.e. it isn't all that nominally cheap to get stuff done in China, but it's quick and efficient to get stuff done in China, since they've got everything you need. This points to a missed opportunity: had we lot taken care of infrastructure that way, we would be well equipped to maintain local industry.
That's not to negate anything you said, just an aside observation.
-
@IrishCarney Fair. Buran was fairly ill-conceived and useless; there wasn't really a good place for it in the grid of Soviet space requirements, but it was made because "oh look, they have a reusable spacecraft, we should do something similar", just as you say.
I wouldn't say that the semiconductor push of East Germany was useless. After all, the Soviet Union was cut off from Western and Japanese semiconductors and from semiconductor manufacturing tech suppliers, and VEB Elektron filled a vital niche for that whole region; it could potentially have made a lot of bank under the circumstances. It was wasteful and there was mismanagement at hand. I think the crucial thing is that already somewhere between 1986 and 1988 it became possible to import Japanese and European ICs into the Soviet Union, and they weren't exactly expensive compared to the cost of a semiconductor operation, so East German industry had to either adapt (at least get honest and stop chasing increasingly expensive stuff they couldn't actually build economically) or crumble. The fact that the Soviet Union was opening up and trying to reduce the amount of mismanagement and waste, and the rest of the world was opening up towards the Soviet Union, broke the business model East Germany had bet big on.
-
But an FPGA is also an SRAM device, which makes it somewhat slow, space- and power-inefficient, and cost-inefficient compared to dedicated silicon at sufficient volume. Decades ago you'd have antifuse FPGAs, which were essentially programmable-ROM based configurable logic devices, but those seem to have died out, having sat in a bit of a no-man's-land between in-system programmable devices and dedicated silicon. Before that, there were ULAs, which were mask-ROM configurable logic devices.
And then microcontrollers with configurable peripherals, such as DMACs or something like the RP2040's PIO peripheral, probably eat quite substantially into CPLD/FPGA territory for a lot of tasks these would have been used for previously, as they're much easier to program. You're also correct in recognising a configurability increase in insanely complex circuits, and that it's absolutely vital: PC CPUs contain fixed ALUs, but the actual software-facing instruction set is implemented as "microcode", a rough and usually barely functional version of which is on the silicon, and it gets updated from the PC firmware on every boot. Spinning up such a device costs a fortune, and they are prone to bugs being discovered in the field which need to be fixed. Also, so many hardware designs have leaned into SoC territory as time went on, like "oof, this is hard - let's stick a processor core in there", and thus everything, even devices you think of as fixed-function like SD cards and just about every USB peripheral, has become firmware driven. When you buy a dedicated-function USB-to-serial chip today, you find inside an 8051-compatible core and a ROM.
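As an illustration of the "much easier to program" point, here is roughly the stock MicroPython PIO example for the RP2040 (the pin number and state machine clock are assumptions): a tiny state machine toggles a pin with deterministic timing, the kind of glue-logic job a small CPLD might otherwise have done.

```python
# A minimal RP2040 PIO program in MicroPython (runs on a Pico board): one state
# machine toggles a pin with cycle-exact timing, no CPU involvement after start.
# GPIO 25 and the 2 kHz state machine clock are assumptions for illustration.
import rp2
from machine import Pin

@rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
def square_wave():
    set(pins, 1) [31]   # drive the pin high, then idle 31 extra cycles
    nop()        [31]
    set(pins, 0) [31]   # drive the pin low, then idle 31 extra cycles
    nop()        [31]

sm = rp2.StateMachine(0, square_wave, freq=2000, set_base=Pin(25))
sm.active(1)            # the state machine now runs on its own
```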
As universal as FPGAs are, they have also always been a niche device. The promise of FPGAs everywhere is somehow always getting nearer yet always staying further away, and it has been this way for like 20 years.
-
@mariusvanc That was not true when nVidia was formed and rose to power. This is an NVidia story, not a GPU story.
Well, there was an NVidia GPU in the original Xbox, and it was a decent GPU stuck in a sub-par system, so it's difficult to say whether it or the GeForce 3 was actually better; on paper the Xbox one was better, but it was probably bus-choked. But of course the Ti 4200 would come out for PC a few months later, and then it was no contest.
The second time an NVidia GPU was in a console was the PS3, and it was not good, just not good at all; the only saving grace is that with Cell, it didn't have to do quite as much work, as a lot of T&L and post-processing could be moved to the SPUs. ATI's chip that found its way into the Xbox 360, on the other hand, was truly next-gen stuff at the time of release, pretty exciting. But then the GeForce 8800 came out and wiped the floor.
-
@weksauce Image recognition bound up with spatial reasoning is a very hard task, which by all reason requires a large neural network. The network needs to generalise from the data set. When the data set is too small for the size of the network, the network generally learns it verbatim instead, or focuses on detecting secondary, coincidental traits.
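A toy illustration of that memorisation failure mode, with a deliberately oversized network and deliberately meaningless labels; the sizes and the sklearn model are my choices for the sketch, not anything from the original discussion.

```python
# An over-parameterised network "learning" 50 randomly labelled samples: it can
# memorise the training set almost perfectly while generalising no better than chance.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 32))            # 50 samples, 32 features
y = rng.integers(0, 2, size=50)          # random labels: nothing to generalise

net = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=5000, random_state=0)
net.fit(X, y)
print("train accuracy:", net.score(X, y))                    # typically ~1.0

X_new = rng.normal(size=(50, 32))
y_new = rng.integers(0, 2, size=50)
print("fresh data accuracy:", net.score(X_new, y_new))       # ~0.5, i.e. chance
```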
Problem: we don't have a great understanding of, or manual access into, the function of neural networks. As for ad hoc, hardcoded solutions, they tend to be very difficult to integrate, except specifically by using synthetic data. But i would agree a handcrafted solution should still be present as a failsafe running alongside the neural network subsystems, specifically because we have such a mediocre grasp on them. But the two systems should also ideally mostly agree rather than fight, as that's a source of danger as well.
Synthetic data also helps solve the chicken and egg problem. Like, yes, ideally you'd have more real world data, and predominantly more diverse real world data with better corner case coverage. But the raw data we can get easily isn't marked up. You can train the markup on synthetic data, and human supervision and correction is orders of magnitude cheaper than manual markup from the ground up. Then you can gradually increase the amount of real world data used in training.
-
Does ASUS still engineer its own products, or is it basically a marketing company that contracts engineering out to Pegatron and others? What about ASRock, another Pegatron subsidiary supplying computer components?
I don't know about Taiwan being unable to cultivate global brands. Isn't Acer such a brand, one that just about everybody knows? Then again, people buy an Acer because it's practical (inexpensive, accessible), not because they like the brand. They spun off a brand with a much better image in BenQ, but with much less broad awareness. HTC, Gigabyte and MSI have also been successful global Taiwanese brands, but they are very much niche brands, nothing like LG, Samsung or Xiaomi.