muah hua haaaaa
ggppjj
- 0 Posts
- 43 Comments
It’s very distinct. I wouldn’t call it vomit, but it’s a lingering, chemical-y taste.
I think I have a mutation in a taste bud or something, but sucralose has a really prominent and nasty taste to me in anything it’s in. It’s really frustrating to taste it in otherwise sugary drinks, like some of the Monster flavors.
For reference, I also think cilantro tastes like soap. No clue if that’s an indicator or not.
ggppjj@lemmy.world to Technology@lemmy.world • Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases · English · 0 · 3 months ago
A crucial part of your statement is that it knows that it’s untrue, which it is incapable of knowing. I would agree with you if it were actually capable of understanding.
A false statement would be me saying that a light I cannot see and have never seen, which is currently red, is actually green, without knowing either way. I’m just as likely to be right as to be wrong; statistics are involved.
A lie would be me knowing that the light I am currently looking at is red and saying that it is actually green. No statistics involved; I’ve done it intentionally, and the only outcome of my decision to act was that I spoke a falsehood.
AIs can generate false statements, yes, but they are not capable of lying. Lying requires cognition, which LLMs are, by their own admission and by the admission of the companies developing them, at the very least not currently capable of, and personally I believe that it’s likely that LLMs never will be.
Ask ChatGPT, I’m done arguing effective consciousness vs actual consciousness.
https://chatgpt.com/share/67c64160-308c-8011-9bdf-c53379620e40
What do you believe that it is actively doing?
Again, it is very cool and incredibly good math that provides the next word in the chain that most likely matches what came before it. They do not think. Even models that deliberate are essentially just self-reinforcing the internal math with what is basically a second LLM to keep the first on-task, because that appears to help distribute the probabilities better.
I will not answer the brain question until LLMs have brains also.
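The "next word in the chain that most likely matches what came before it" idea can be made concrete with a toy sketch. This is not a real LLM (which uses a learned neural network over huge token vocabularies, not raw counts); it's a minimal bigram model, with an invented corpus, that just picks the statistically most frequent next word:

```python
from collections import Counter, defaultdict

# Toy corpus (made up for illustration).
corpus = "i am sorry . i am wrong . i was wrong .".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    # No knowledge or intent involved: the highest count simply wins.
    return follows[prev].most_common(1)[0][0]

print(next_word("i"))  # "am" follows "i" more often than "was" does
```

The point the comment is making maps directly onto this: if the training text is full of apologies, the weights make an apology the most probable continuation, and that's the whole mechanism.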
We did, a long time ago. It’s called an encyclopedia.
If humans can’t be trusted to only provide facts, how can we be trusted to make a machine that only provides facts? How do we deal with disputed truths? Grey areas?
It is incapable of knowledge; it is math, and what it says is determined by what is fed into it. If it admits to lying, that’s because it was trained on texts that admit to lying, and the math says the most likely response is an apology, built from such-and-such tokenized responses with such-and-such probability weights, etc.
It apologizes because math says that the most likely response is to apologize.
Edit: you can just ask it y’all
https://chatgpt.com/share/67c64160-308c-8011-9bdf-c53379620e40
I strongly worry that humans really weren’t ready for this “good enough” product to be their first “real” interaction with something that can easily pass as an AGI, absent near-philosophical knowledge of the difference between an AGI and an LLM.
It’s obscenely hard to keep the fact that it is a very good pattern-matching auto-correct in mind when you’re several comments deep into a genuinely actually no lie completely pointless debate against spooky math.
I think the important point is that LLMs as we understand them do not have intent. They are fantastic at producing output that appears to meet the requirements set in the input text, and when they actually do meet those requirements instead of just seeming to, they can provide genuinely helpful info. But it’s very easy to miss the difference between output that merely looks correct, which satisfies the purpose of an LLM, and output that actually is correct, which satisfies the purpose of the user.
I think it’s more convenient for their overall design of modern Windows; IIRC, by default it’ll run the installed version of Windows under a hypervisor as well. For their purposes, for the majority of users, there would be little to no performance loss.
ggppjj@lemmy.world to Technology@lemmy.world • How One AI Startup Founder Cornered Microsoft Into Finally Taking Down Explicit Videos of Her · English · 0 · 3 months ago
That was where it was uploaded first; the takedowns were for it later being hosted on Azure cloud services.
He’s funny when he cares, he stopped caring about family guy a looooooooooooooooooong time ago. Eh, whatever.
ggppjj@lemmy.world to Lemmy Shitpost@lemmy.world • Thanks Duo, I don't know how I'm supposed to feel about a coquettish green owl-unicorn · English · 0 · 4 months ago
It’s doing what it was made to do: be abnormal enough to cause people to diegetically share their brand.
Gotta be up there. Over 6 million at least if I had to guess.
ggppjj@lemmy.world to homeassistant@lemmy.world • The era of open voice assistants has arrived - Home Assistant · English · 0 · 6 months ago
Somebody else linked this motherboard replacement for the Home Minis; I haven’t looked very closely at it myself yet: https://github.com/justLV/onju-voice
Where I’m from, we strung 'em up by the bura’zak-ka.
INSPIRED by that same french excellence