Comments by "LoneTech" (@0LoneTech) on "Lawfirm Loses $60K Relying on ChatGPT in the Courtroom" video.
It apologizes and attempts a different answer. If the answer involved anything complex, it is very likely to introduce unrelated and incorrect changes as well.
2
By very professionally ignoring the wrapper explaining ChatGPT is unreliable. It's fine print like any other TOS or EULA, after all; only relevant when the big company decides to go after their customers, not for Very Important Lawyers.
2
It's not exactly lying, because the concept of true or false is just as abstract to it as any other concept. It has only language flow, not the reasoning, knowledge or sense of self you might expect. However, the producers do prioritize producing an output over making it accurate or relevant, and they do share blame for hyping it up for things it remains horrible at. The very architecture of ChatGPT is incapable of "doing the research".
2
The fine print on signing up for ChatGPT says you're responsible for any use you make of its output and that no veracity guarantee exists. You know, just like any other fine print: the absolute disclaimer is the first thing they put in. Then come the restrictions telling you what you're not allowed to do.
1
No, by design. That would impede the plagiarism.
1
The producers of ChatGPT are promoting it for all uses, largely by training it to do the promotion itself, so they can pretend the claims weren't theirs. Its primary function is automated plagiarism, and its goal isn't to be accurate but to respond. Actual traceability would be counter to that primary function, so don't expect it to produce references reliably.
1
It doesn't really target novelty either. It simply goes with the flow, and if you're specific enough you just might end up with a chunk of unaltered training data. Which the producers of the LLM generally have no license for.
1
@1486230 While you have a point, you can still choose to make or demand a guarantee, as in some agreement of compensation if the results fail. With ChatGPT or similar, you've already entered an agreement that there will be no guarantees, and the system you're talking to is incapable of entering agreements.
1
It's worse. They're trained to recognize the phrase and attempt to circumvent it by lying to you (the company's lie; the bot doesn't have knowledge and doesn't know it's repeating falsehoods). The one you hit presumably claimed it didn't understand; the last one I had to struggle with claimed it couldn't redirect the call. Sometimes they add a filter to only connect you when you roar at them, you know, to ensure their actual employees get the worst conditions possible.
1
You're more likely to get a chunk of disclaimerese for the first, and half an article about LLMs for the second. Not because either is accurate, but because that's what it was directed to do. In particular, it does not have a concept of self; "ChatGPT" or "you" is just as abstract a subject to it as "orange juice" or "yesterday".
1