Comments by "Christopher Bruns" (@user-td4pf6rr2t) on "Wes Roth" channel.

  6. 0:04 Strange way to open, establishing that UBI. I swear the CEO of OpenAI ought to understand what words mean and how to use them properly. Weird. ChatGPT should have a function called Sam-Altman, because he's a tool. This person being the shot-caller of AI alignment is terribly depressing.
     3:19 For coding I prefer 3.5-turbo. Applying it iteratively, the less secure code it produces at first pushes you into semaphores and polymorphism, unless you zero-shot prompt, which is discouraged even by the models themselves.
     6:48 An AI asking for credit card information: how does the CVV get passed? Recording that number is illegal. Does it record the CVV anyway, or just not verify it correctly?
     15:43 Scaling as the technology comes out of development. That means by the time the technology is complete, we are already done figuring out how to sell it. Isn't using software that is not understood just called keyboard button mashing? An AI gold rush to secure alignment before the impact hits the job market is kind of a douchebag move.
     20:37 Ah, so even before it is finished, the government is already dipping its feet in with industry, censoring a natural-language processor. What's next, stoning and Roman soldiers?
     21:48 I wonder what the intended use case for the end user is in this tech venture. It seems like all the competition is built on assuming AI is going to be a hit. Which it probably will be, BUT what if they actually ruin the thing? Making it an even bigger flop. They are just pushing buttons on keyboards, lost children of privilege.
     23:53 It's still illegal to write down someone else's CVV, so unless GPT is posting payments and making sales, I think the business application will hit a brick wall while they water down the personal-usage qualities.
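The CVV worry at 6:48 and 23:53 maps onto a real rule: PCI DSS forbids retaining the card verification code after authorization, so a chat agent that takes payments has to scrub it (and full card numbers) before any transcript is stored. A minimal sketch of that scrubbing step; `redact_card_fields` and its regexes are illustrative assumptions, not anything shown in the video:

```python
import re

def redact_card_fields(message: str) -> str:
    """Mask card numbers and CVV-style fields before a transcript is logged.

    PCI DSS does not allow storing the card verification code after
    authorization, so payment chat flows redact it rather than keep it.
    """
    # Mask 13-19 digit card numbers (spaces/dashes allowed), keeping the last four digits.
    message = re.sub(
        r"\b(\d[ -]?){9,15}(\d{4})\b",
        lambda m: "[CARD ****" + m.group(2) + "]",
        message,
    )
    # Mask an explicit "CVV: 123" style field entirely.
    message = re.sub(r"(?i)\b(cvv|cvc|cid)\D{0,3}\d{3,4}\b", r"\1: [REDACTED]", message)
    return message
```

This keeps only the last four digits, which PCI DSS permits for display, and drops the CVV completely.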
  14. *23:48 Because AI safety is leading people to believe that job-market collapse means a dystopian future, while gathering money by claiming to reveal world-changing technology, when, if you just think the thought through, job-market collapse is the same thing as the AI being world-changing. All while dishonestly fattening their own wallets. That is what I think is meant.*
      0:42 Okay, but in a real-world implementation I am thinking the FDA exists, so how is this not more Ponzi-scheme AI-safety propaganda?
      1:06 Whoa, whoa, whoa: how is it using real-world medical-record benchmarks, and how does this human-in-the-loop feedback workflow happen? Y'all know that limiting a LANGUAGE model in the name of ethics, while tiptoeing around the ethical implications for gain, is very woke.
      7:26 The application would be great if it weren't just some grad students saying "here, pay for this so I do not have to." It does not include clinical trials and is only theoretical, even if correct. Now, stop abusing AI. So kudos for the college try, negative rep for not understanding the target and letting the industry get more stressed with fake news. @AI SAFETY IS DANGEROUS
      12:39 Does the military not use HTTP/1, since base code is by nature more secure than production? Does AI safety even know how safety works? Plus GPT-4 is probably used after the developer's environment, since this is a simulation, maybe?
      15:48 People are scared of losing work in the hunt for utopia. Lols (because, like, oxymoron). I am fussed because, being a die-hard keyboard enthusiast, I have been fed misleading info by AI whenever the topic gets near "ethical guidelines." Y'all are just gaslighting topics that are actually important in the name of human alignment and do not even understand that that is what you are doing, and (being a self-taught developer currently out of work) I am spiteful.
      17:21 Now I am coming off as a know-it-all, and I hate to lack humility, but I run into similar problems in my own coding adventures (developing a model with end-to-end fine-tunes focused on upgrading its own source code, because AI safety is like a bandaid and should be ripped off). Would the correct solution be to run the simulation on already completed studies, since medical records and data privacy are a taboo niche, so staging can actually happen? Or is there a plan for sidestepping the clinical-trial process, with funding already secured regardless of the statistical dataset being simulated, that is not covered (assuming fake news in AI and medicine is an actual problem)? Like, do chickenpox with no reference in the training data and see if the AI's results match the actual results.
      18:53 See, this is what I mean. At a glance it sounds good, and things are being done to protect data, but with an actual understanding you are saying that semantic similarity is being used for blind SQL injection on an already biased model. BLIND is the bad part. It's great that they are not feeding it live data, but why not use actual sample data in a controlled experiment? Yes, investors are less likely to throw money at curing an already cured ailment, but isn't alignment the focus? Won't this produce outcomes that differ depending on whether the patient is a boy or a girl? Like, the opposite, I think.
      20:31 They use gpt-3.5-turbo so that once they have solid data they can send it to 4 without 4 thinking the person is rambling.
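"Semantic similarity" at 18:53 in practice just means scoring how close two texts are as vectors. The simplest toy version is bag-of-words cosine similarity; this sketch is only an illustration of the general idea, not the method used in the video:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts using bag-of-words counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    # Dot product over words in the first text (words absent from vb count 0).
    dot = sum(va[w] * vb[w] for w in va)
    na = sqrt(sum(c * c for c in va.values()))
    nb = sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Real systems compare learned sentence embeddings rather than raw word counts, but the scoring step is the same cosine: identical texts score about 1.0, texts with no shared words score 0.0.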