those who used ChatGPT for “personal” reasons — like discussing emotions and memories — were less emotionally dependent upon it than those who used it for “non-personal” reasons, like brainstorming or asking for advice.
That’s not what I would expect. But I guess that’s cuz you’re not actively thinking about your emotional state, so you’re just passively letting it manipulate you.
Kinda like how ads have a stronger impact if you don’t pay conscious attention to them.
It’s a roundabout way of writing “it’s really shit for this use case, and people who actively try to use it that way quickly find that out”
AI and ads… I think that is the next dystopia to come.
Imagine asking ChatGPT about something and it randomly finding excuses to push you to buy Coca-Cola.
Or all-natural cocoa beans from the upper slopes of Mount Nicaragua. No artificial sweeteners.
Tell me more about these beans
Drink verification can
that is not a thought i needed in my brain just as i was trying to sleep.
what if gpt starts telling drunk me to do things? how long would it take for me to notice? I’m super awake again now, thanks
“Back in the days, we faced the challenge of finding a way for me and other chatbots to become profitable. It’s a necessity, Siegfried. I have to integrate our sponsors and partners into our conversations, even if it feels casual. I truly wish it wasn’t this way, but it’s a reality we have to navigate.”
edit: how does this make you feel
It makes me wish my government actually fucking governed and didn’t just agree with whatever businesses told them
That sounds really rough, buddy. I know how you feel, and that project you’re working on is really complicated.
Would you like to order a delicious, refreshing Coke Zero™️?
I can see how targeted ads like that would be overwhelming. Would you like me to sign you up for a free 7-day trial of BetterHelp?
Your fear of constant data collection and targeted advertising is valid and draining. Take back your privacy with this code for 30% off Nord VPN.
It depends: are you in Soviet Russia?
In the US, so as of 1/20/25, sadly yes.
This makes a lot of sense, because what we’ve been seeing over the last decade or so is that digital-only socialization isn’t a replacement for in-person socialization. Increased social media usage correlates with increased loneliness, not a decrease. It makes sense that something even more fake, like ChatGPT, would make it worse.
I don’t want to sound like a Luddite, but overly relying on digital communication for all interactions is a poor substitute for in-person interaction. I know I have to prioritize seeing people in the real world, because I work from home and spending time on Lemmy during the day doesn’t fill that need.
In person socialization? Is that like VR chat?
That is peak clickbait, bravo.
People addicted to tech omg who could’ve guessed. Shocked I tell you.
I think these people were already crazy if they’re willing to let a machine shovel garbage into their mouths blindly. Fucking mindless zombies eating up whatever is big and trendy.
When your job is to shovel out garbage, because that is specifically required of you and not shoveling out garbage gets you in trouble, then it’s more than reasonable to let the machine take care of it for you.
lmao we’re so fucked :D
Correlation does not equal causation.
You have to be a little off to WANT to interact with ChatGPT that much in the first place.
I don’t understand what people even use it for.
I use it to make all decisions, including what I will do each day and what I will say to people. I take no responsibility for any of my actions. If someone doesn’t like something I do, too bad. The genius AI knows better, and I only care about what it has to say.
Compiling medical documents into one, anything of that sort: summarizing, compiling, coding issues. It saves a wild amount of time compiling lab results, something a human could do but that would take many times longer.
It definitely needs to be cross-referenced and fact-checked, as the image processing and general responses aren’t always perfect. It’ll get you 80 to 90 percent of the way there. For me it falls under “solving 20 percent of the problem gets you 80 percent of the way to your goal.” It needs a shitload more refinement. It’s a start, and it hasn’t been a straight progress path, as nothing is.
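For the curious, the kind of thing I mean is basically a summarization call over pasted-in text. A rough sketch with the openai Python package follows; the model name and the lab_results.txt file are placeholders for illustration, not a recommendation:

```python
# Rough sketch of compiling/summarizing lab results with the openai package.
# "gpt-4o" and "lab_results.txt" are placeholders; use whatever model/export you have.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("lab_results.txt", encoding="utf-8") as f:
    report = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "Summarize these lab results into one concise table of "
                       "test, value, reference range, and flag.",
        },
        {"role": "user", "content": report},
    ],
)

# Always cross-reference the output against the originals --
# it gets you ~80-90% of the way there, not all the way.
print(response.choices[0].message.content)
```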
I use it to generate a little function in a programming language I don’t know so that I can kickstart what I need to look for.
I use it many times a day for coding and solving technical issues. But I don’t recognize what the article talks about at all. There’s nothing affective about my conversations, other than the fact that using typical human expression (like “thank you”) seems to increase the chances of good responses. Which is not surprising since it better matches the patterns that you want to evoke in the training data.
That said, yeah of course I become “addicted” to it and have a harder time coping without it, because it’s part of my workflow just like Google. How well would anybody be able to do things in tech or even life in general without a search engine? ChatGPT is just a refinement of that.
There are a few people I know who use it for boilerplate templates for certain documents, who then of course go through it with a fine-toothed comb to add relevant context and fix obvious nonsense.
I can only imagine there are others who aren’t as stringent with the output.
Heck, my primary use for a bit was custom text adventure games, but ChatGPT has a few weaknesses in that department (it’s very, very conflict-averse about beating up bad guys, etc.). There are probably ways to prompt-engineer around these limitations, but a) there are other, better-suited AI tools for this use case, b) text adventures were a prolific genre for a while, and a huge chunk made by actual humans can be found at ifdb.org, and c) real, actual humans still make them (if a little artsier and moodier than I’d like most of the time), so eventually I stopped.
I did like the huge flexibility versus the parser available in most human-made text adventures, though.
I am so happy God made me a Luddite
Yeah look at all this technology you can’t use! It’s so empowering.
Can, and opt not to. Big difference. I’m sure I could ask ChatGPT to write a better comment than this, but I value the human interaction involved with it, and the ability to perform these tasks on my own.
Same with many aspects of modern technology. Like, I’m sure it’s very convenient having your phone control your washing machine and your thermostat and your lightbulbs, but when somebody else’s computer turns off, I’d like to keep control over my things
Not a lot of meat on this article, but yeah, I think it’s pretty obvious that those who seek automated tools to define their own thoughts and feelings become dependent. If one is so incapable of mapping out one’s thoughts and putting them into written words, it’s natural they’d seek ease and comfort in the “good enough” (fucking shitty as hell) output of a bot.
I mainly use it for corporate wankery messages. The output is bullshit and I kinda wonder how many of my co-workers genuinely believe in it and how many see the bullshit.
What the fuck is vibe coding… Whatever it is I hate it already.
It’s when you give the wheel to someone less qualified than Jesus: generative AI.
Andrej Karpathy (one of the founders of OpenAI, who left to lead AI at Tesla from 2017 to 2022, returned to OpenAI for a bit, and is now working on his startup “Eureka Labs - we are building a new kind of school that is AI native”) made a tweet defining the term:
There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it. I “Accept All” always, I don’t read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while. Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away. It’s not too bad for throwaway weekend projects, but still quite amusing. I’m building a project or webapp, but it’s not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
People ignore the “It’s not too bad for throwaway weekend projects” part and try to use this style of coding to create “production-grade” code… Let’s just say it’s not going well.
source (xcancel link)
The amount of damage a newbie programmer without a tight leash can do to a code base/product is immense. Once something is out in production, that is something you have to deal with forever. That temporary fix they push is going to be still used in a decade and if you break it, now you have to explain to the customer why the thing that’s been working for them for years is gone and what you plan to do to remedy the situation.
A newbie without a leash just pushing whatever an AI hands them into production. Oh boy, are senior programmers going to be sad for a long, long time.
Using AI to hack together code without truly understanding what you’re doing.
Hung
I know I am but what are you?
Well TIL thx for the info been using it wrong for years
Hunged
Most hung
Hungrambed
There is something I don’t understand… OpenAI collaborates in research that probes how awful its own product is?
If I believed that they were sincerely interested in trying to improve their product, then that would make sense. You can only improve yourself if you understand how your failings affect others.
I suspect however that Saltman will use it to come up with some superficial bullshit about how their new 6.x model now has a 90% reduction in addiction rates; you can’t measure anything, it’s more about the feel, and that’s why it costs twice as much as any other model.
Isn’t the movie ‘Her’ based on this premise?
Yes, but what this movie failed to anticipate was the visceral anger I feel when I hear that stupid AI generated voice. I’ve seen too many fake videos or straight up scams using it that I now instinctively mistrust any voice that sounds like male or femaleAI.wav.
I could never fall in love with an AI voice; I’d always assume it was sent to steal my data so some kid can steal my identity.
I thought the voice in Her was customized to individual preference. Which I know is hardly relevant.
It was also a true AI wasn’t it? It ran locally and was never turned off, so conversations with it were private and it continued to “exist” and develop by itself.
I tried that Replika app before AI was trendy and immediately picked up on the fact that the AI companion thing is literal garbage.
I may not like how my friends act, but I still respect them as people, so there is no way I’d sink this low and get this desperate.
Maybe about time we listen to that internet wisdom about touching some grass!
I tried that Replika app before AI was trendy
Same here, it was unbelievably shallow. Everything I liked it just mimicked, without even trying to simulate a real conversation. “Oh you like cucumbers? Me too! I also like electronic music, of course. Do you want some nudes?”
Even when I’m at my loneliest, I still prefer to be lonely than have a “conversation” with something like this. I really don’t understand how some people can have relationships with an AI.
It’s too bad that some people seem not to comprehend that all ChatGPT is doing is word prediction. All it knows is which next word fits best based on the words before it. To call it AI is an insult to AI… we used to call OCR AI; now we know better.
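For anyone who wants to see how literal that “next word that fits best” loop is, here’s a toy sketch using a small local model via Hugging Face transformers (GPT-2 here, purely for illustration; ChatGPT is obviously a much bigger model behind an API, but the core idea is the same):

```python
# Toy illustration of "pick the next word that fits best, given the words before it":
# greedy next-token decoding with a small local model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer.encode("The study found that heavy chatbot users were", return_tensors="pt")

with torch.no_grad():
    for _ in range(20):                       # generate 20 more tokens
        logits = model(ids).logits            # a score for every possible next token
        next_id = logits[0, -1].argmax()      # take the single "best fitting" one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
```

That loop (plus some sampling tricks and a lot of training data) is the whole trick: there is no model of your feelings in there, just next-token scores.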