Comments by "Christopher Bruns" (@user-td4pf6rr2t) on the "Wes Roth" channel.
8:45 BULL: NOISE. Unless they have your dog's weight, diet, and weekly checkup wrapped into each query, I call nonsense.
1
AI safety. Does anyone ever wonder if they affect zero-shot prompting? 6:11 GPT-4 is ridiculously expensive. When I discovered streaming and thread pools in the same night, I ran up over $100 just tinkering.
1
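The comment above mentions running up a large bill while experimenting with streaming and thread pools. Below is a minimal sketch of how that kind of tinkering multiplies requests, assuming the openai Python package (v1.x) and an OPENAI_API_KEY environment variable; the prompts and model choice are placeholders, not anything from the video.

```python
# Hypothetical sketch: many streamed GPT-4 calls fired from a thread pool.
# Each worker opens its own streamed completion, so cost scales with the
# number of submitted prompts, not with wall-clock time.
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY is set

client = OpenAI()

def stream_completion(prompt: str) -> str:
    """Stream one chat completion and return the assembled text."""
    stream = client.chat.completions.create(
        model="gpt-4",  # pricier than gpt-3.5-turbo; this is where bills grow
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    parts = []
    for chunk in stream:
        parts.append(chunk.choices[0].delta.content or "")
    return "".join(parts)

if __name__ == "__main__":
    prompts = [f"Explain thread pools, variation {i}" for i in range(20)]
    # 20 concurrent GPT-4 requests: tokens (and dollars) add up quickly.
    with ThreadPoolExecutor(max_workers=5) as pool:
        for text in pool.map(stream_completion, prompts):
            print(text[:80])
```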
0:59 Yes, inherently, when hardening resources the threat model is increased by nature, because hardening. 3:59 Cookies? 5:52 Yeah, and they're stomping on our Yankee necks in the AI race. 7:27 Kind of like Q or whatever that was about.
1
7:35 So OpenAI is just completely out of the open-source ecosystem with Whisper being replaced. Stereograph, probably. 9:40 So it only needs to charge for 4 hours? 13:58 I think the only economy AI might impact would be the US, since reserve currency. 16:00 Dump money into AI that can replace developers.
1
ChatGPT wouldn't exist without magnitudes of outsourced data sets. Why would training from ChatGPT be any different?
1
0:04 Steamroll establishing that UBI. I swear the CEO of OpenAI would be thought to understand what words mean and how to use them properly. Weird. ChatGPT should have a function called Sam-Altman. Because tool. This person being shot-caller of AI alignment is terribly depressing. 3:19 For coding I prefer 3.5-turbo. Applied iteratively, the initially less secure code results in use of semaphores and polymorphism, unless zero-shot prompting, which is discouraged even by the models themselves. 6:48 AI asking for credit card information: how does the CVV get passed? Since recording that number is illegal. Does it record the CVV anyway, or not verify correctly? 15:43 Scaling as the technology comes out of development. This means: once the technology is complete, we are already finished with how to sell it. Isn't using software that is not understood just called keyboard button mashing? An AI gold rush to secure alignment before the impact on the job market is kind of a douchebag move. 20:37 Ah, so even before it's completed, the government is actually dipping its feet in with industry, censoring a natural language processor. What's next, stoning and Roman soldiers? 21:48 I wonder what the intended use case for the end user is in this tech venture. It seems like all the competition is focused around assuming AI is going to be a hit. Which it probably will be, BUT what if they actually ruin the thing, being even bigger flops? They are just pushing buttons on a keyboard, lost children of privilege. 23:53 It's still illegal to write down someone else's CVV, so unless GPT is posting payments and making sales, I think the business application will actually hit a brick wall while they water down the personal-usage qualities.
1
Judging by Sam's personality, this probably indicates something huge soon. The one obvious thing is this guy does not leave money on the table. Dropping the charge on the top tier, I would guess there is some financial padding in play somewhere.
1
0:21 It looks like they have a diversity hire.
1
You cannot do artificial training data unless the AI employs its own algorithm for search, or else it will show inflated benchmarks, since the math for binary search is handled by the coder. It'd be like asking it to find the leak in a sieve. AI SAFETY WAS DANGEROUS; now here we are. 10:43 There can be only one, else the competition is just incorrect. Since best practices. The best code review I've gotten from AI is Marv, the sarcastic prompt featured in the OpenAI ChatGPT docs. Sarcasm and constructive criticism are almost the same thing. 15:01 Guys, it is the future! STOP monetizing. It should be able to pay for itself. Like YouTube. No one is building AI. Y'all are making a tracker cookie in plain text.
1
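The comment above points to "Marv", the sarcastic example prompt from OpenAI's documentation, as a surprisingly good code reviewer. A minimal sketch of that idea follows, assuming the openai Python package; the system wording and the snippet under review are illustrative, not the docs' verbatim text.

```python
# Hypothetical sketch: code review via a sarcastic system prompt,
# in the spirit of the "Marv" example from OpenAI's documentation.
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY is set

client = OpenAI()

SNIPPET = """
def add(a, b):
    return a - b   # oops
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": (
                "You are Marv, a chatbot that reluctantly gives code reviews "
                "with sarcastic but technically accurate responses."
            ),
        },
        {"role": "user", "content": f"Review this function:\n{SNIPPET}"},
    ],
)
print(response.choices[0].message.content)
```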
People are worried about competing for a job with AI; imagine applying for work where the competition has telepathy. Technically it's kind of creating a disability at the same time, with the wealth gap and integration. This guy probably has a better chance of getting a job in data entry than someone who's been to school for a specialty. At least it will cure the 2050 pension crisis, with a huge percentage being able to work again. Also, if the guy could announce his play before the click, it would sell better, I think. Do white blood cells cross the blood-brain barrier? You could try intentionally swelling the brain before the procedure, so when the swelling goes down everything is in place. And what about cybersecurity? App hardening while brain implants exist is going to leave a pretty bad taste in your mouth.
1
1:18 WOW... THE AI COMMUNITY IS THE BIGGEST GROUP OF HYPOCRITES OF ALL TIME.
1
9:59 My AI says to make sure my API keys and environment variables are set correctly =/ 14:43 I think Strawberry is a runoff of Snapdragon and stemming.
1
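The comment above repeats the model's stock advice to confirm API keys and environment variables are set. A minimal standard-library sketch of that check is below; the variable name OPENAI_API_KEY is the conventional one and assumed here.

```python
# Hypothetical sketch: fail fast if the expected API key is missing.
import os
import sys

def require_env(name: str) -> str:
    """Return the value of an environment variable or exit with a hint."""
    value = os.environ.get(name)
    if not value:
        sys.exit(f"{name} is not set; export it before running the client.")
    return value

if __name__ == "__main__":
    api_key = require_env("OPENAI_API_KEY")
    print(f"Found a {len(api_key)}-character key; the client can be constructed.")
```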
So we can use it for food, fight... what's the other F-word?
1
*23:48 Because AI safety is leading people to believe that job-market collapse means a dystopian future, while gathering money by claiming to reveal world-changing technology, when if you just think the thought through, job-market collapse is the same as AI being world-changing. While just fattening their own wallets dishonestly. Is what I think is meant.* 0:42 Okay, but in a real-world implementation I am thinking that the FDA exists, and how isn't this more Ponzi-scheme AI-safety propaganda? 1:06 Whoa whoa whoa - how is it using real-world medical record benchmarks, and how does this human-feedback-loop workflow happen? Y'all know that limiting a LANGUAGE model in the name of ethics while tiptoeing around the ethical implications for gains is very woke. 7:26 The application is great, if it weren't just some grad students saying 'here, pay for this so I do not have to. It does not include clinical trials and is only theoretical even if correct. Now, stop abusing AI.', so kudos for the college try, negative rep for not understanding the target and for letting the industry become more stressed with fake news. @AI SAFETY IS DANGEROUS 12:39 Does the military not use HTTP/1, since base code is by nature more secure than production? Does AI safety even know how safety works? Plus GPT-4 is probably used after the developer's environment. Since it is a simulation, maybe? 15:48 People are scared of losing work. In the hunt for utopia. Lols (because, like, oxymoron). I am fussed because, being a die-hard keyboard enthusiast, I have been fed misleading info from AI whenever the topic nuances "ethical guidelines". Y'all are just gaslighting topics that are actually important in the name of human alignment and do not even understand that is what you are doing - and (being a self-taught developer currently out of work) I am spiteful. 17:21 Now I am coming off as a know-it-all and hate to lack humility, but I am running into similar problems in my own coding adventures (developing a model with end-to-end fine-tunes focused on upgrading its own source code (because AI safety is like a bandaid, it should be ripped off)); would the correct solution be to run the simulation on already completed studies, since medical records and data privacy are actually a taboo niche? So staging can actually happen. Or is there a plan for sidestepping the clinical trial process, with funding already secured regardless of the statistical dataset being simulated, that is not covered (assuming fake news with AI and medicine is an actual problem...) - like, do chickenpox but with no reference in the training data and see if the AI results match the actual results. 18:53 See, this is what I mean. At a glance it sounds good and things are being done to protect data, but with an actual understanding you are saying that semantic similarity is used for blind SQL injection on an already biased model. BLIND is the bad part. It's great that they are not feeding it live data, but why not use actual sample data in a controlled experiment? Yes, investors are less likely to throw money at curing an already cured ailment, but isn't alignment the focus? - Won't this produce outcomes that differ depending on whether the patient is a boy or a girl? Like, opposite, I think... 20:31 They use gpt-3.5-turbo so that when they have solid data they can send it to 4 without the 4 thinking the person is rambling.
1
Maybe he can use ChatGPT to help his defense. Everyone can use the data to infer before applying it to healthcare. 3:14 Obviously generative_text:adobe could help. 3:40 Perhaps if the Times win they could start an LLM, == ChatGPT but with actual ethics. 5:04 Like how OpenAI is actually closed source and a soap opera now, weird... 5:23 '"Sorry, but as a democracy, to persuade a group of people to pursue an agenda is not being democratic. Please understand the agenda before asking this question."' Okay, I'm dumb, venting - darn robit. 6:14 This is not generative text though; this is Python and natural language. LLM tech is BPE and undocumented tokenization meets inference. Which means it just brute-forces all the wrong answers with its corpus - which they receive gains from. I'ma start a class action and actually develop a UBI. Bottom lines are bottom. _ARCHIVE_BELLOW__ chat_gpt()
1
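The comment above characterizes LLM tech as "BPE and undocumented tokenization meets inference." For what the BPE step actually looks like, here is a minimal sketch assuming the tiktoken package; the sample sentence is arbitrary.

```python
# Hypothetical sketch: inspect the BPE tokens a GPT-style model actually sees.
import tiktoken  # assumes the tiktoken package is installed

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")  # cl100k_base under the hood

text = "LLM tech is BPE tokenization meets inference."
tokens = enc.encode(text)

print(tokens)                             # integer token ids
print([enc.decode([t]) for t in tokens])  # the byte-pair pieces
print(f"{len(tokens)} tokens for {len(text)} characters")
```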
So, not only are they cracking down on the ethical standard and limiting AGI's potential for society for the sake of homeland security, but it's also tightening its ability to detect cyber threats and mitigating these dangers before they are introduced into the ecosystem... THESE ARE OPPOSITE CHARACTERISTICS! 4:10 We have only had two keynotes airing once per year, so any established prediction more than one year out is fringe science. Fact. 10:10 Letting these communities steal AI is probably our key defense against an AI apocalypse. Perhaps? 12:57 Isn't there just a little silver sticker on the bottom of every electronic ever that should define the security level? Bro, if we can't even build auto-auto-completions because of politics - nobody is going to let us build a nuclear reactor that orbits our planet's star.
1