Comments by "XSportSeeker" (@XSpImmaLion) on "Theranos Was A Dumpster Fire - But Here's Where It Was Brilliant | Answers With Joe" video.

  1. The other thing the Theranos case exposes, and one reason it was pursued so hard in court, is the US investment environment, which has been extremely worrying for several years now. See, good ideas that turned out to be scams, extreme cases of overpromising and underdelivering, crazy science-defying products that are supposed to change everyone's lives forever... these have always been around and always will be, throughout the history of mankind. There are two things, though, that tie Theranos to the times we live in. Number one is crowdfunding and some of its mechanisms: why so many scams and so much crazy crap end up on crowdfunding platforms, how they raise so much money, and why people keep falling for them endlessly. Theranos wasn't crowdfunded, but thinking in those terms may help you understand it a bit. The second is how you now have large corporations, large investment groups, extremely wealthy individuals, and just... this mass of people who will pour insane amounts of money into an absurd idea without any hint of proper vetting, verification, display of proof, consultation with multiple specialists, or thorough analysis... heck, sometimes without even a hint of critical reasoning or thinking things through. Theranos is basically the cold fusion in a shoebox of medical science. And the shame here is that a whole ton of very rich people, very powerful companies, and very big investors fell flat on their faces believing such a fairy tale. This is why Holmes is going to jail, and this is why it became such a symbolic case. There is a huge tendency in current and recent generations toward over-fetishization of technological products, over-romanticization of characters, too much focus on cult of personality, and too little understanding and hard work put into figuring out the logic behind things.
Jobs wasn't some perfect genius who came up with the concept of the smartphone all by himself, he wasn't some sort of crazy super genius who revolutionized the world, and I'm gonna argue that even his supposedly miraculous product has as many downsides as supposed benefits. But people like fantastic stories, so they shape and mold things so that, when told, they seem to be that way. Jobs was a workaholic with lots of talent and a focus on his personal marketing, taking credit for a crazy number of things he didn't come up with, and charismatic enough to push people to work their asses off toward a single goal. But he was also extremely flawed in numerous ways. Flawed enough that it ended his own life prematurely. But hey, that's not the important part, and shame on you for looking for flaws in a dead person, yadda yadda. Yeah... screw that. People may want to see things in black and white all they want; I'd rather see them for what they really are and avoid being the next scam victim. Separate subject that I'll just leave here because it's related: I also hate the "quantified self" trend. Medical lab equipment and new ways to make diagnoses are fine, and new methods to detect pathogens and identify sources of illness are fine too. But there are two main problems I have with health monitoring gadgetry and constant tracking of health indicators. First and foremost, of course, privacy. The current state of electronics and data collection isn't appropriate even for non-medical private data, let alone constant, detailed monitoring of medical data. Until regulators, lawmakers, and governments realize how damaging this level of privacy erosion is for a nation in general, we cannot have detailed health data being constantly collected and insecurely transferred and stored in some proprietary database patients have no control over. Second, the "quantified self" crap is useless when the "self" has no clue what to do with the information.
The flawed target in this case is the false belief that the more information we have, the better, period. No thought about consequences, no preconditions, no rationale behind the logic. See, lab diagnostics are all up to interpretation. You can't see in your blood or DNA the names of illnesses and how to treat them. It's not some sort of medicine 101 for noobs where you can see the answer in a microscope the minute you put a blood, saliva, or urine sample under it. So you need a specialist to understand and interpret the data, because people cannot be expected to buy a gadget and suddenly acquire with it the experience a doctor built over years of learning and practice. Oh, but an AI will do that for us, yadda yadda. AI is yet another thing that is not magic, is not free of extremely problematic side effects and consequences, and that people have been over-fetishizing and over-relying on these days. Look at any practical example of AI in use. Apart from the curious, oversimplified, made-for-show, inconsequential examples, such as defeating people at chess, video games, and whatnot, almost all AI applications I can think of nowadays are extremely flawed, not even close to achieving what they're supposed to do, and in a few cases put on a full display of how wrong things can go when you rely on them too much. Racist bots, false positives and negatives everywhere, and perhaps most important of all: creators not knowing why, or how to fix them. See, this is the main issue often ignored in AI development: it's a black box that no one, not even its creators, can fully understand. And it tends to be as flawed as whoever created it, with all the biases, all the assumptions, all the flawed logic the creators somehow included in it, or at the very least a whole ton of the prejudices and problems general society currently has, because that's all it was trained on.
So you see the problem in handing human health diagnostics over to it from the start: if you let a branch of human science be taken over by AI, how is it ever going to advance human knowledge in that field? You can't simply deposit all your future advancements inside a single box, let alone a black box, and just leave it at that. If something goes wrong there, and it eventually will, how exactly do people think they'll fix the problem when they don't even understand how the thing works? That's the general problem with AI: we get convenience, and perhaps some functionality, by surrendering our ability to make progress ourselves to a black box we can't understand. Even if AI can be continuously trained and evolved over time, if you can't see how it's doing that and can't understand why, that's devolution. Machines and algorithms become the new gods, and we go back to praying for favorable results. Enough of a rant from me... but yeah, some stuff to think about.