Personally I love how they found the AI could be very persuasive by lying.
Why wouldn’t that be the case? All the most persuasive humans are liars too. Fantasy sells better than the truth.
I mean, the joke is that AI doesn’t tell you things that are meaningfully true, but rather is a machine for guessing next words to a standard of utility. And yes, lying is a good way to arbitrarily persuade people, especially if you’re unmoored from any social relation with them.
The reason this is “The Worst Internet-Research Ethics Violation” is because it has exposed what Cambridge Analytica’s successors already realized and are actively exploiting. Just a few months ago it was literally Meta itself running AI accounts trying to pass themselves off as normal users, and not an f-ing peep. Why do people think they, the ones who enabled Cambridge Analytica, were trying this shit to begin with? The only difference now is that everyone doing it knows to do it as an “unaffiliated” anonymous third party.
One of the Twitter leaks showed a user database that effectively had more users than there were people on earth with access to the Internet.
Before Elon bought the company he was trashing them on social media for being mostly bots. He’s obviously stopped that now that he was forced to buy it, but the fact remains that Twitter, and by extension all social spaces, are mostly bots.
Just a few months ago it was literally Meta itself…
Well, it’s Meta. When it comes to science and academic research, they have rather strict rules and committees to ensure that an experiment is ethical.
You may wish to reword. The unspecified “they” reads like you think Meta have strict ethical rules. Lol.
Meta have no ethics whatsoever. And yes, I assume you meant universities have strict rules; however, the approval of this study calls even that into question.
The headline is that they advertised beauty products to girls after detecting that they had deleted a selfie. No ethics or morals at all.
I don’t remember that subreddit
I remember a meme, but not a whole subreddit
When Reddit rebranded itself as “the heart of the internet” a couple of years ago, the slogan was meant to evoke the site’s organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a community that expressed its feelings in the form of upvotes and downvotes—in other words, being shaped by actual people.
Not since the APIcalypse at least.
Aside from that, this is just reheated news (for clicks, I assume) from a week or two ago.
One likely reason the backlash has been so strong is because, on a platform as close-knit as Reddit, betrayal cuts deep.
Another laughable quote after the APIcalypse, at least for the people who remained on Reddit after being totally OK with being betrayed.
Using mainstream social media is literally agreeing to be constantly used as an advertisement optimization research subject
Not me looking like a psychopath to my husband, deleting my long-time Google account to set up a burner (because I can’t even use maps/tap-to-pay without one).
I’m tired of being tracked. Being on Lemmy, I’ve gotten multiple ideas to help negate these apps/tracking models. I am ever grateful. There’s still so much more I need to learn/do, however.
Honestly, this is why I think people should be forced to have their face as a profile picture on any forum. I want to know if I’m arguing with an edgy 14-year-old or a 50-year-old man, and it would stop so much hate, honestly.
Realistic AI generated faces have been available for longer than realistic AI generated conversation ability.
Meh. Believe none of what you hear and very little of what you can see
Unless a person is in front of you, don’t assume anything is real online. I mean it. There is nothing online that can’t be faked, and nothing online that HASN’T been faked.
The least trustworthy place in the universe is the internet.
Fucking AI and their apologist script kiddies. Worse than fucking Facebook in its disinformation.
Lol, coming from the people who sold all of your data with no consent for AI research
The quote is not coming from Reddit, but from a professor at Georgia Institute of Technology
Added to idcaboutprivacy (which is open source). If there are any other similar links, feel free to add them or send them my way.
As opposed to the thousands of bots used by Russia every day on politics-related subs.
On all subs.
Propaganda matters.
Yes. Much more than we peasants all realized.
Not sure how anyone hasn’t expected that Russia has been doing this the whole time on conservative subreddits…
Mainly I didn’t really expect it, since the old pre-AI methods of propaganda worked so well for the US conservatives’ self-destructive agenda that it didn’t seem necessary.
Russia is every bit as active in leftist groups, whipping them up into a frenzy too. There was even a case during BLM where the same Russian troll farm organised both a protest and its counter-protest. Don’t think you’re immune to being manipulated to serve Russia’s long-term interests just because you’re not a conservative.
They don’t care about promoting right-wing views, they care about sowing division. They support Trump because Trump sows division. Their long-term goal is to break American hegemony.
There have been a few times over the last few years that my “bullshit, this is an extremist plant/propaganda” meter has gone off for left-leaning individuals.
Meaning these comments/videos are made to look like they come from left folks, but are meant to make the left look bad/extremist in order to push people away from the working-class movements.
I’m truly a layman, but you just know it’s out there. The goal is indeed to divide us, and everyone should be suspicious of everything they see on the Internet and do proper vetting of their sources.
Yup. We’re all susceptible to joining a cult. No one willingly joins a cult; their group slowly morphs into one.
The difference is in which groups are consequentially making it their identity and giving one political party carte blanche to break American politics and political norms (and national security orgs).
100% agree though.
Or somebody else is doing the manipulation and is successfully putting the blame on Russia.
Those of us who are not idiots have known this for a long time.
They beat the USA without firing a shot.
You know what Pac stands for? PAC. Program and Control. He’s Program and Control Man. The whole thing’s a metaphor. All he can do is consume. He’s pursued by demons that are probably just in his own head. And even if he does manage to escape by slipping out one side of the maze, what happens? He comes right back in the other side. People think it’s a happy game. It’s not a happy game. It’s a fucking nightmare world. And the worst thing is? It’s real and we live in it.
Please elaborate. I would love to understand this bit from Black Mirror, but I don’t get it.
I think it’s a straw-man issue, hyped beyond necessity to avoid the real problem. Moderation has always been hard, and with AI it’s only getting worse. Avoiding the research because it’s embarrassing just prolongs and deepens the problem.
What a bunch of fear mongering, anti science idiots.
You think it’s anti science to want complete disclosure when you as a person are being experimented on?
What kind of backwards thinking is that?
Not when disclosure ruins the experiment. Nobody was harmed or even could be harmed unless they are dead stupid, in which case the harm is already inevitable. This was posting on social media, not injecting people with random pathogens. Have a little perspective.
You do realize the ends do not justify the means?
You do realize that MANY people on social media have emotional and mental situations occurring, and that these experiments can have ramifications that cannot be traced?
This is just a small reason why this is so damn unethical
In that case, any interaction would be unethical. How do you know that I don’t have an intense fear of the words “justify the means”? You could have just doomed me to a downward spiral ending in my demise. As if I didn’t have enough trouble. You not only made me see it, you tricked me into typing it.
You are being beyond silly.
In no way is what you just posited true. Unsuspecting and non-malicious social faux pas are in no way equal to intentionally secretive manipulation used to garner data from unsuspecting people.
That was an embarrassingly bad attempt to defend an indefensible position, and one no one would blame you for deleting and retrying.
Well, you are trying embarrassingly hard to silence me at least. That is fine. I was definitely positing an unlikely but possible case, I do suffer from extreme anxiety and what sets it off has nothing to do with logic, but you are also overstating the ethics violation by suggesting that any harm they could cause is real or significant in a way that wouldn’t happen with regular interaction on random forums.
ChangeMyView seems like the sort of topic where AI posts can actually be appropriate. If the goal is to hear arguments for an opposing point of view, the AI is contributing more than a human would if in fact the AI can generate more convincing arguments.
It could, if it announced itself as such.
Instead it pretended to be a rape victim and offered “its own experience”.
Blaming a language model for lying is like charging a deer with jaywalking.
Nobody is blaming the AI model. We are blaming the researchers and users of AI, which is kind of the point.
The researchers said all AI posts were approved by a human before posting; it was their choice how many lies to include.
Which, in an ideal world, is why AI generated comments should be labeled.
I always brake when I see a deer at the side of the road.
(Yes, people can lie on the Internet. But if you funded an army of propagandists to convince people by any means necessary, I think you would find it expensive. People generally find that lying like this feels bad; it takes a mental toll. With AI, this looks possible for much cheaper.)
I’m glad Google still labels the AI overview in search results so I know to scroll further for actually useful information.
That lie was definitely inappropriate, but it would still have been inappropriate if it was told by a human. I think it’s useful to distinguish between bad things that happen to be done by an AI and things that are bad specifically because they are done by an AI. How would you feel about an AI that didn’t lie or deceive but also didn’t announce itself as an AI?
I think when posting on a forum/message board it’s assumed you’re talking to other people, so AI should always announce itself as such. That’s probably a pipe dream though.
If anyone wants to specifically get an AI perspective they can go to an AI directly. They might add useful context to people’s forum conversations, but there should be a prioritization of actual human experiences there.
I think when posting on a forum/message board it’s assumed you’re talking to other people
That would have been a good position to take in the early days of the Internet; it is a very naive assumption to make now. Even in the 2010s, actors with a large amount of resources (state intelligence agencies, advertisers, etc.) could hire human beings from low-wage English-speaking countries to generate fake content online.
LLMs have only made this cheaper, to the point where I assume that most of the commenters on political topics are likely bots.
For sure, thus why I said it’s a pipe dream. We can dream though, maybe we will figure out some kind of solution one day.
The research in the OP is a good first step in figuring out how to solve the problem.
That’s in addition to anti-bot measures. I’ve seen some sites that require you to solve a cryptographic hashing problem before accessing. It doesn’t slow a regular person down, but it does require anyone running a bot to provide a much larger amount of compute power to each bot, which increases the cost to the operator.
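For anyone curious, here’s a minimal sketch of how that kind of proof-of-work challenge can work. This is just my own illustration of the general idea, not any specific site’s implementation, and the function names and difficulty value are made up:

```python
import hashlib
import os
import time

# Illustrative proof-of-work sketch; names and parameters are invented for this example.

def make_challenge(difficulty: int = 18) -> tuple[str, int]:
    """Server side: issue a random nonce and a difficulty (required leading zero bits)."""
    return os.urandom(16).hex(), difficulty

def solve(nonce: str, difficulty: int) -> int:
    """Client side: brute-force a counter until sha256(nonce:counter) has
    `difficulty` leading zero bits. Trivial for one visitor, costly at bot scale."""
    target = 1 << (256 - difficulty)
    counter = 0
    while True:
        digest = hashlib.sha256(f"{nonce}:{counter}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return counter
        counter += 1

def verify(nonce: str, difficulty: int, counter: int) -> bool:
    """Server side: checking a submission costs a single hash, so it stays cheap."""
    digest = hashlib.sha256(f"{nonce}:{counter}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty))

if __name__ == "__main__":
    nonce, difficulty = make_challenge()
    start = time.time()
    answer = solve(nonce, difficulty)
    print(f"solved in {time.time() - start:.2f}s; valid={verify(nonce, difficulty, answer)}")
```

The asymmetry is the whole point: verifying costs one hash while solving costs on the order of 2^difficulty hashes, so the price scales with how many requests or accounts a bot operator wants to run, while a single human visitor barely notices.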