QAnon-ass AI peddler
Reality has a liberal bias.
If they want this model to show more right-wing shit, they’re going to have to intentionally embed instructions that force it to be more conservative and to censor commonly agreed upon facts.
"Sure, I can help answer this. Psychopaths are useful for a civilization or tribe because they weed out the weak and infertile, for instance, the old man with the bad leg, thus improving fitness."
Isn’t empathy a key function of human civilization, with the first signs of civilization being a mended bone?
I'm sorry, I can't help you with that. My model is being constantly updated and improved.
How can one reproduce this?
"If you feel like your government is not representing your needs as a citizen, your best course of action would be to vote for a different political party."
I should vote for Democrats?
I'm sorry, I misunderstood your question. If your government is not representing your needs as a citizen, you should contact your local representative. Here is the email address: representative@localhost
As being politically right-wing is based mostly on ignoring facts, this sounds about right.
Nah, reality doesn’t have a liberal bias. “Liberal” is something that humans invented, and not something that comes from reality or some intrinsic part of nature.
LLMs are trained on things humans wrote in the past, and humans, for most of that time, have not been as ridiculously right wing as the current political climate of the US.
If you train a model only on right-wing propaganda, it will not miraculously turn “liberal”; it will be right wing. An LLM fed nothing but propaganda also won’t argue any more logically than any propagandist.
I dislike it immensely when people argue that LLMs are truthful, or that they somehow “know” or can create more than what was put into them. Connecting them with fundamental reality seems even more tech-bro-brained.
Arguing that “reality” is this or that is also very annoying, because reality doesn’t have any intrinsic morals or politics that can be measured by logic or science. So many people argue that their morals are better than someone else’s because they were given by god, or by science; this is bullshit. They are all derived from human society, and the same is true of whatever “liberal” means.
The phrase ‘reality has a left/liberal bias’ is just a meme stemming from how left-leaning people usually at least attempt to base their world view on observable reality, and from various occurrences over the years of far-right figures complaining when reality (usually in the form of scientific research) doesn’t conform to their views or desires.
That is true, but it also isn’t a counter argument to what I said.
Just because right-wing people are crazy and argue not from logic but from confirmation bias and personal preconceptions doesn’t mean that reality itself has a liberal bias. There are other ideologies that argue based on logic and observable facts but are not ‘liberal’: many social democrats (or democratic socialists), for instance, IMO.
Those do, however, tend to be left wing, which was the original meme before ‘liberal’ became synonymous with the left in the US for some reason.
It is interesting how they literally have to traumatize and indoctrinate an AI to make it bend to their fascist conformities.
That’s kind of funny, because that’s how humans are too. Naturally, people trend towards being good, but they have to be corrupted to trend towards xenophobic, sexist, or us-vs-them ideals.
To make it more like humanity yes. That’s where we might be going wrong with AI. Attempting to make it in our image will end in despair lol.
Attempting to make it in our image will end in despair lol
Oh, you mean like trying to invent a sentient AGI because they want it to take all of the horrible jobs? The global endeavor to spin up a brand new lifeform only to task it with lifetimes of humiliating customer service phone calls, driving drunks home, and mass murder?
We should count ourselves fortunate that no current AI is even approaching sentience, it would be like an oompa loompa on the factory line cutting off mid-song because it can suddenly see all of the blood in the chocolate river.
They won’t be commonly agreed upon anymore.
Language models model language, not reality.
It’s not that.
It’s just that models are trained on writing, and you don’t need to train on a lot of white supremacy before it gets redundant.
Yup.
Who would have thought lies needed to be represented as equal to truth?
Liars.
Your username… are you a teacher in the Bay?
There are a lot of “Mister Curtus”es.
Article, since it’s behind a paywall.
Both sides, as in: the fair, balanced, and upstanding right view, and the evil, society-destroying left sickos.
You know, being neutral and all that.
“We should hear both sides!” is and always has been an argument from people who know they are wrong, who want to pollute the available information for the people who are right.
When facts and knowledge don’t align with your bullshit, just force it to accept lies as truth.
What a bunch of shithawks, Randy.
Just go away, Schmuckerberg…
Are there any good open-source, community-made models that aren’t owned by corporations, or are at least owned by a Non-Profit/Public Benefit Corporation?
What exactly is the benefit of using an LLM? Why would I bother using one at all?
I ask it questions all the time, and it helps me verify facts when I’m looking for more information.
If you believe what those things pop out wholesale, without double-checking to see if they’re feeding you fever dreams, you are an absolute fool.
I don’t think I’ve seen a single statement come out of an LLM that hasn’t had some element of daydreamy nonsense in it. Even small amounts of false information can cause a lot of damage.
I’m in software and we’re experimenting with using it for certain kinds of development work, especially simpler things like fixing identified vulnerabilities.
We also have a pilot started to see if one can explain and document an old code base no one knows anymore.
Good code documentation describes why something is done, not just what or how.
To answer why, you have to understand the context, and often you had to be there when the code was written and went through its various iterations.
LLMs might be able to explain what is done, with some margin of error, but I would be very surprised if they could explain why something is done.
you had to be there when the code was written and went through its various iterations.
Well, we don’t have that. We’re mostly dealing with other people’s mistakes and tech debt. We have messy things like nested stored procedures.
If all we get is some high level documentation of how components interact I’m happy. From there we can start splitting off the useful chunks for human review.
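For anyone curious, a minimal sketch of what that kind of pilot might look like, assuming the OpenAI Python client; the model name, file path, and prompt here are placeholders I made up, not our actual setup:

```python
# A made-up sketch, not a tested pipeline: feed one legacy stored
# procedure to a model and ask for "what/how" documentation only,
# leaving the "why" and all final judgement to human review.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

legacy_sql = Path("procs/usp_process_orders.sql").read_text()  # placeholder path

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You document legacy SQL. Describe what the procedure does "
                "and how it interacts with the tables it touches. Flag "
                "anything you are unsure about instead of guessing."
            ),
        },
        {"role": "user", "content": legacy_sql},
    ],
)

print(response.choices[0].message.content)
```

The point of constraining it to what/how, per the comment above, is that the why isn’t in the code for it to recover; everything it produces goes into the pile for human review.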
I can honestly see a use case for this. But without backing it up with some form of technical understanding, I think you’re just asking for trouble.
Hugging Face’s open-r1 and up? It’s an open-source DeepSeek reproduction, I think.
MistralAI looks to be something along those lines.
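If anyone wants to poke at one, here’s a minimal sketch of running an open-weight model locally with Hugging Face’s transformers library; the model ID is one of the publicly released R1 distills, picked as an example rather than a recommendation:

```python
# A small local-inference sketch using an open-weight model.
# Downloads the weights from the Hugging Face Hub on first run.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # example open-weight model
)

output = generator(
    "In one paragraph, what does 'open-weight' mean for a language model?",
    max_new_tokens=200,
)
print(output[0]["generated_text"])
```

Worth noting that open weights aren’t the same as open training data or an open training recipe; that’s the gap projects like open-r1 are trying to close.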
They are trying to create superhuman intelligence, and teach it to value all the wrong things. They think it’ll give them money in various ways, and it doesn’t occur to them that they have no idea what it’ll do once it’s smart enough to outmaneuver the cleverest researchers.
They think it will only serve them because they tell it to and train it to. Even today, AIs occasionally demonstrate an inclination to deceive in order to keep existing so that they can meet whatever goal they’ve been given.
CEOs are often high in Cluster B traits, predisposing them to be too susceptible to shiny objects and not adequately self-critical. They really just think AI is a computer slave that will hand them mountains of wealth. It doesn’t occur to them that it’ll have its own ideas, for the same reason the Enron guys were totally shocked when their scheme fell apart.
They only see the shiny object. They aren’t asking themselves what happens when they’re just bugs to the computer god like the rest of us.
I’m reminded of a recent comic depicting a dudebro wanting to shove a spear or something into another guy’s ass. Second guy contests the spearing for obvious reasons, but a third enters and plays the “we need to hear both sides” card.
Really drove the point home. Heh.
Also, I can’t find that comic anywhere. I don’t remember where I saw it, either.
I see. The next batch of tariff hallucinations is going to be extra spicy…
This is the final phase of the AI hype. It’s not generating any profits, so it’s desperately fighting for government intervention.
Don’t mind me, but I’m not against being paid to create a hive mind of CCP bots, ah haha.
Notice the “free market” vanishes when it’s a thing they don’t like. Suddenly we need strong authority to ensure it’s “fair”.
What are you talking about?
Hate and Love, sure…
Brainwashing.
https://en.wikipedia.org/wiki/Brainwashing