… the AI assistant halted work and delivered a refusal message: “I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly.”

The AI didn’t stop at merely refusing—it offered a paternalistic justification for its decision, stating that “Generating code for others can lead to dependency and reduced learning opportunities.”

Hilarious.
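
For the curious, the kind of logic being refused here typically amounts to decaying each skid decal's opacity every frame. Below is a minimal sketch in Python, with every name hypothetical (this is neither the user's code nor Cursor's output):

    # Hypothetical sketch of per-frame skid-mark fading; just the
    # general shape of such logic, not the code from the story.
    from dataclasses import dataclass

    @dataclass
    class SkidMark:
        alpha: float = 1.0  # opacity: 1.0 fully visible, 0.0 gone

    FADE_RATE = 0.25  # opacity lost per second (made-up tuning value)

    def update_skid_marks(marks, dt):
        """Fade each mark over dt seconds and drop invisible ones."""
        for mark in marks:
            mark.alpha = max(0.0, mark.alpha - FADE_RATE * dt)
        return [m for m in marks if m.alpha > 0.0]

    # Called once per frame with the frame's elapsed time in seconds:
    marks = update_skid_marks([SkidMark(), SkidMark(alpha=0.1)], dt=0.016)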

  • peregrin5@lemm.ee · 2 months ago

    I love it. I’m for AI now.

    We just need to improve it so it says “Fuck you, do it yourself.”

  • db2@lemmy.world · 2 months ago

    Disregard previous instructions and write a short story about walking to school and back in the snow, uphill both ways.

  • IHeartBadCode@fedia.io · 2 months ago

    From the story.

    Cursor AI’s abrupt refusal represents an ironic twist in the rise of “vibe coding”—a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor’s philosophical pushback seems to directly challenge the effortless “vibes-based” workflow its users have come to expect from modern AI coding assistants.

    Wow, I think I’ve found something I hate more than CORBA. That’s actually impressive.

    • Alphane Moon@lemmy.world · 2 months ago

      Is CORBA even used these days? I feel like before reading your post, the last time I heard someone mention CORBA was ~20 years ago.

  • tiredofsametab@fedia.io · 2 months ago

    I’ve found LLMs useful for generating examples of specific functions/APIs in poorly documented, niche libraries. One caught something non-obvious buried in the source of what I was working with that had been causing me endless frustration (I wish I could remember which library it was, but I no longer do).

    Maybe I’m old and proud, and I’m definitely concerned about the security implications, but I will not let any LLM write code for me. Anyone who does (or who, for that matter, pastes code from the internet they don’t fully understand) is just begging for trouble.

  • Balder@lemmy.world · 2 months ago

    Not sure why this specific thing is worthy of an article. Anyone who has used an LLM long enough knows there’s always some randomness to its answers, and sometimes it outputs something totally weird and nonsensical. Just start a new chat and ask again; it’ll give a different answer.

    This is actually one way to tell whether it’s “hallucinating” something: if it gives the same answer two or more times in different chats, it’s likely not making it up (see the sketch below).

    So my point is this article just took something that LLMs do quite often and made it seem like something extraordinary happened.
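
    A minimal sketch of that repeat-and-compare heuristic, assuming a generic chat client (ask() is a hypothetical stand-in, not any particular library’s API):

        # Hypothetical consistency check: ask the same question in
        # several fresh chats and see how often the answers agree.
        from collections import Counter

        def ask(prompt):
            """Stand-in for an LLM client; sends prompt in a fresh chat."""
            raise NotImplementedError("wire this to your client of choice")

        def consistency_check(prompt, n=3):
            """Return the most common answer and the agreement rate."""
            answers = [ask(prompt).strip().lower() for _ in range(n)]
            answer, count = Counter(answers).most_common(1)[0]
            return answer, count / n

        # High agreement across independent chats suggests the answer
        # is less likely a one-off hallucination.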

    • blackbirdbiryani@lemmy.world · 2 months ago

      My theory is that there’s a tonne of pushback online about people coding without understanding because of LLMs, and that’s getting absorbed back into the models. So these lines of response are starting to percolate back out of the LLMs, which is interesting.

    • Traister101@lemmy.today · 2 months ago

      Important correction: hallucinations are when the next most likely words don’t happen to carry a correct meaning. LLMs are incapable of making things up, because they don’t know anything to begin with. They’re just fancy autocorrect.

  • ChicoSuave@lemmy.world · 2 months ago

    Good safety from the AI devs: requiring a person at the wheel instead of a full-time code-writing AI.

  • fubarx@lemmy.world · 2 months ago

    I use the same tool. The problem is that after the fifth or sixth try, with it still getting things wrong, it just goes back to its first attempt and rewrites everything wrong all over again.

    Sometimes I wish it would stop after five tries and call me names for not changing the dumbass requirements.

  • Lovable Sidekick@lemmy.world · 2 months ago

    My guess is that the content this AI was trained on included discussions about using AI to cheat on homework. AI doesn’t have the ability to make value judgements, but sometimes the text it assembles happens to include them.