• Owl@lemm.ee · 2 months ago

    Well, it only took two years to go from the cursed Will Smith eating spaghetti video to Veo 3, which can make completely lifelike videos with audio. So who knows what the future holds.

    • Trainguyrom@reddthat.com · 2 months ago

      The cursed Will Smith eating spaghetti wasn’t the best video AI model available at the time, just what was available for consumers to run on their own hardware at the time. So while the rate of improvement in AI image/video generation is incredible, it’s not quite as incredible as that viral video would suggest

      • wischi@programming.dev · 2 months ago

        But wouldn’t your point still be true today: that the best AI video models are the ones that aren’t available to consumers?

    • wischi@programming.dev · 2 months ago

      There actually isn’t really any doubt that AI (especially AGI) will surpass humans on all thinking tasks unless we have a mass extinction event first. But current LLMs are nowhere close to actual human intelligence.

    • moseschrute@lemmy.world · 2 months ago

      Hot take: today’s AI videos are cursed. Bring back Will Smith spaghetti. Those were the good old days.

  • markstos@lemmy.world · 2 months ago

    This weekend I successfully used Claude to add three features to a Rust utility I had wanted for a couple of years. I had opened issue requests, but no one else volunteered. I had tried learning Rust, Wayland and GTK to do it myself, but the docs at the time weren’t great and the learning curve was steep. But Claude figured it all out pretty quickly.

        • coherent_domain@infosec.pub · 2 months ago

          This is interesting, I would be quite impressed if this PR got merged without additional changes.

          I am genuinely curious, with no judgement at all: since you mentioned that you are not a Rust/GTK expert, are you able to read and have a decent understanding of the output code?

          For example, in the sway.rs file, you uncommented a piece of code in the get_all_windows function; do you know why it is uncommented?

          • markstos@lemmy.world · 2 months ago

            This is interesting, I would be quite impressed if this PR got merged without additional changes.

            We’ll see. Whether it gets merged in any form, it’s still a big win for me because I finally was able to get some changes implemented that I had been wanting for a couple years.

            are you able to read and have a decent understanding of the output code?

            Yes. I know other coding languages and CSS. Sometimes Claude generated code that was correct but I thought it was awkward or poor, so I had it revise. For example, I wanted to handle a boolean case and it added three booleans and a function for that. I said no, you can use a single boolean for all that. Another time it duplicated a bunch of code for the single and multi-monitor cases and I had it consolidate it.

            In one case, it got stuck debugging and I was able to help isolate where the error was through testing. Once I suggested where to look harder, it was able to find a subtle issue that I couldn’t spot myself. The labels were appearing far too small at one point, but I couldn’t see that Claude had changed any code that should affect the label size. It turned out two data structures hadn’t been merged correctly, so default values weren’t getting overridden. It was the sort of issue I could see a human dev introducing on the first pass.

            do you know why it is uncommented?

            Yes, that’s the fix for supporting floating windows. The author reported that previously there was a problem with the z-index of the labels on these windows, so that’s apparently why it was implemented but commented out. But it seems due to other changes, that problem no longer exists. I was able to test that labels on floating windows now work correctly.

            Through the process, I also became more familiar with Rust tooling and Rust itself.

    • Match!!@pawb.social · 2 months ago

      LLMs are systems that output human-readable natural language answers, not true answers.

    • zurohki@aussie.zone · 2 months ago

      It generates an answer that looks correct. Actual correctness is accidental. That’s how you wind up with documents whose references don’t exist: it just knows what references look like.

      • snooggums@lemmy.world · 2 months ago

        It doesn’t ‘know’ anything. It is glorified text autocomplete.

        The current AI is intelligent like how Hoverboards hover.

            • capybara@lemm.ee · 2 months ago

              You could claim that it knows the pattern of how references are formatted, depending on what you mean by the word know. Therefore, 100% uninteresting discussion of semantics.

              • irmoz@lemmy.world · 2 months ago

                The theory of knowledge (epistemology) is a distinct and storied area of philosophy, not a debate about semantics.

                There remains to this day strong philosophical debate on how we can be sure we really “know” anything at all, and thought experiments such as the Chinese Room illustrate that “knowing” is far, far more complex than we might believe.

                For instance, is it simply following a set path, like a river in a gorge? Is it ever actually “considering” anything, or just doing what it’s told?

                • capybara@lemm.ee · 2 months ago

                  No one cares about the definition of knowledge to this extent except for philosophers. The person who originally used the word “know” most definitely didn’t give a single shit about the philosophical perspective. Therefore, you shitting yourself over a word not being used exactly as you’d like, instead of understanding its usage in context, is very much semantics.

        • malin@thelemmy.club · 2 months ago

          This is a philosophical discussion and I doubt you are educated or experienced enough to contribute anything worthwhile to it.

          • frezik@midwest.social · 2 months ago

            Insulting, but also correct. What “knowing” something even means has a long philosophical history.

            • snooggums@lemmy.world · 2 months ago

              Trying to treat the discussion as a philosophical one is giving more nuance to ‘knowing’ than it deserves. An LLM can spit out a sentence that looks like it knows something, but it is just pattern-matching the frequency of word associations, which is mimicry, not knowledge.

              • irmoz@lemmy.world · 2 months ago

                I’ll preface by saying I agree that AI doesn’t really “know” anything and is just a randomised Chinese Room. However…

                Acting like the entire history of the philosophy of knowledge is just some attempt to make “knowing” seem more nuanced is extremely arrogant. The question of what knowledge is is not just relevant to the discussion of AI, but is fundamental to understanding how our own minds work. When you form arguments about how AI doesn’t know things, you’re basing it purely on the human experience of knowing things. But that calls into question how you can be sure you even know anything at all. We can’t just take it for granted that our perceptions are a perfect example of knowledge; we have to interrogate that and see what it is that we can do that AIs can’t, or worse, discover that our assumptions about knowledge, and perhaps even of our own abilities, are flawed.

                • snooggums@lemmy.world · 2 months ago

                  Acting like the entire history of the philosophy of knowledge is just some attempt to make “knowing” seem more nuanced is extremely arrogant.

                  That is not what I said. In fact, it is the opposite of what I said.

                  I said that treating the discussion of LLMs as a philosophical one gives ‘knowing’ more nuance than it deserves.

        • Oniononon@sopuli.xyz · 2 months ago

          LLMs are the smartest thing ever on subjects you have no fucking clue about. On subjects you have at least a year of experience with, they suddenly become the dumbest shit you’ve ever seen.

  • Pennomi@lemmy.world · 2 months ago

    To be fair, if I wrote 3000 new lines of code in one shot, it probably wouldn’t run either.

    LLMs are good for simple bits of logic under around 200 lines of code, or things that are strictly boilerplate. People who are trying to force it to do things beyond that are just being silly.

      • Pennomi@lemmy.world · 2 months ago

        Uh yeah, like all the time. Anyone who says otherwise really hasn’t tried recently. I know it’s a meme that AI can’t code (and in many cases that’s still true, e.g. I don’t have the AI do anything with OpenCV or complex math), but it’s very routine these days for common use cases like web development.

        • GreenMartian@lemmy.dbzer0.com · 2 months ago

          They have been pretty good on popular technologies like python & web development.

          I tried to do Kotlin for Android, and they kept tripping over themselves; it’s hilarious and frustrating at the same time.

          • doktormerlin@feddit.org · 2 months ago

            I use ChatGPT for Go programming all the time and it rarely has problems, even though I think Go is more niche than Kotlin.

            • Opisek@lemmy.world · 2 months ago

              I get a bit frustrated at it trying to replicate everyone else’s code in my code base. Once my project became large enough, I felt it necessary to implement my own error handling instead of Go’s standard errors, which were not sufficient for me anymore. Copilot will respect that for a while, until I switch to a different file. At that point it will try to force standard Go errors everywhere.

          • Pennomi@lemmy.world · 2 months ago

            Not sure what you mean; boilerplate code is one of the things AI is good at.

            Take a straightforward Django project for example. Given a models.py file, AI can easily write the corresponding admin file, or a RESTful API file. That’s generally just tedious boilerplate work that requires no decision making - perfect for an AI.
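
            For instance, a minimal sketch (hypothetical Book model, not from the thread) of the kind of admin boilerplate an LLM can reliably derive from models.py:

            ```python
            # models.py (hypothetical example model)
            from django.db import models

            class Book(models.Model):
                title = models.CharField(max_length=200)
                author = models.CharField(max_length=100)
                published = models.DateField()

            # admin.py - the tedious, decision-free boilerplate an LLM can generate from the model
            from django.contrib import admin
            from .models import Book

            @admin.register(Book)
            class BookAdmin(admin.ModelAdmin):
                list_display = ("title", "author", "published")
                search_fields = ("title", "author")
            ```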

            More than that and you are probably babysitting the AI so hard that it is faster to just write it yourself.

        • Maalus@lemmy.world · 2 months ago

          I recently tried it for scripting simple things in Python for a game. Y’know, change a char’s color if they are targeted. It output a shitton of word salad and code about my specific use case in the specific scripting jargon for the game.

          It was all based on “Misc.changeHue(player)”. A function that doesn’t exist and never has, because the game is unable to color other mobs/players like that for scripting.

          Anything I tried with AI ends up the same way. Broken code in a 10-line script, hallucinations and bullshit spewed as the absolute truth. Anything out of the ordinary is met with “yes this can totally be done, this is how”, and “how” doesn’t work, and after sifting forums / asking devs you find out “sadly that’s impossible” or “we don’t actually use CPython so libraries don’t work like that”, etc.

          • Pennomi@lemmy.world · 2 months ago

            Well yeah, it’s working from an incomplete knowledge of the code base. If you asked a human to do the same they would struggle.

            LLMs work only if they can fit the whole context into their memory, and that means working only in highly limited environments.

            • Maalus@lemmy.world · 2 months ago

              No, a human would just find an API that is publicly available. And the fact that it knew the static class “Misc” means it knows the API. It just hallucinated and responded with bullcrap. The entire concept can be summarized with “I want to color a player’s model in GAME using Python and SCRIPTING ENGINE”.

          • Sl00k@programming.dev · 2 months ago

            It’s possible the library you’re using doesn’t have enough training data attached to it.

            I use AI with Python for data engineering tasks hundreds of lines long, and it nails them frequently.

        • Boomkop3@reddthat.com · 2 months ago

          I tried; it can’t get through four lines without messing up, unless I give it tasks that are so stupendously simple that I’m faster typing them myself while watching TV.

          • Sl00k@programming.dev · 2 months ago

            Four lines? Let’s have a realistic discussion; you’re either intentionally arguing in bad faith or extremely bad at prompting AI.

            • Boomkop3@reddthat.com · 2 months ago

              You can prove your point easily: show us a prompt that gives us a decent amount of code that isn’t stupidly simple or so common that I could just copy-paste the first Google result.

              • Sl00k@programming.dev · 2 months ago

                I have nothing to prove to you; if you wish to keep doing everything by hand, that’s fine.

                But there are plenty of engineers, L3 and beyond, myself included, using this to lighten their workload daily. Acting like that isn’t the case is just arguing in bad faith, or you don’t work in the industry.

        • wischi@programming.dev · 2 months ago

          Play ASCII tic tac toe against 4o a few times. A model that can’t even draw a tic tac toe game consistently shouldn’t write production code.
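
          If you want to try it yourself, here is a tiny helper for producing the board you paste into the chat (a rough sketch; how you number the cells and phrase the prompt is up to you):

          ```python
          # Minimal ASCII tic-tac-toe board for playing against a chatbot by copy-paste.
          def render(board):
              """board is a list of 9 cells containing 'X', 'O' or ' '."""
              rows = [" {} | {} | {} ".format(*board[i:i + 3]) for i in (0, 3, 6)]
              return "\n---+---+---\n".join(rows)

          board = [" "] * 9
          board[4] = "X"          # your move: centre square
          print(render(board))    # paste this into the chat and ask for the model's move
          ```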

    • iAvicenna@lemmy.world · 2 months ago

      I am with you on this one. It is also very helpful with argument-heavy libraries like plotly. If I ask a simple question like “in plotly how do I do this and that to the xaxis”, it generally gives correct answers, saving me from 5-10 minutes of internet research or reading documentation for functions with 1000 inputs. I even managed to get it to render a simple scene of a cloud of points with some interactivity in three.js after about 30 minutes of back and forth. Not knowing much JavaScript, that would have taken me at least a couple of hours. So yeah, it can be useful as an assistant to someone who already knows coding (so the person can vet and debug the code).
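
      For instance, the kind of xaxis tweaking it tends to get right (a small sketch with made-up data; update_xaxes is the plotly call doing the work):

      ```python
      import plotly.graph_objects as go

      # Toy figure with made-up data
      fig = go.Figure(go.Scatter(x=[1, 2, 3, 4], y=[10, 11, 9, 12]))

      # The sort of "how do I do this and that to the xaxis" answer an LLM gets right:
      fig.update_xaxes(
          title_text="Time (s)",
          tickangle=45,        # rotate tick labels
          showgrid=False,      # hide vertical grid lines
          range=[0, 5],        # fix the axis range
      )
      fig.show()
      ```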

      Though if you weigh pros and cons of how LLMs are used (tons of fake internet garbage, tons of energy used, very convincing disinformation bots), I am not convinced benefits are worth the damages.

    • wischi@programming.dev · 2 months ago

      Practically all LLMs aren’t good for any logic. Try to play ASCII tic tac toe against one. All GPT models lost against my four-year-old niece, and I wouldn’t trust her to write production code 🤣

      Once a single model (doesn’t have to be an LLM) can beat Stockfish in chess, AlphaGo in Go and my niece in tic tac toe, and can one-shot (on the surface, scratch-pad allowed) a Rust program that compiles and works, then we can start thinking about replacing engineers.

      Just take a look at the dotnet runtime source code, where Microsoft employees are currently trying to work with Copilot, which writes PRs with errors like forgetting to add files to projects, writing code that doesn’t compile, fixing symptoms instead of underlying problems, etc. (just take a look yourself).

      I don’t say that AI (especially AGI) can’t replace humans. It definitely can and will, it’s just a matter of time, but state-of-the-art LLMs are basically just extremely good “search engines” or interactive versions of “Stack Overflow”, not good enough to do real “thinking tasks”.

      • MonkeMischief@lemmy.today · 2 months ago

        extremely good “search engines” or interactive versions of “Stack Overflow”

        Which is such a decent use of them! I’ve used it on my own hardware a few times just to say “Hey give me a comparison of these things”, or “How would I write a function that does this?” Or “Please explain this more simply…more simply…more simply…”

        I see it as a search engine that connects nodes of concepts together, basically.

        And it’s great for that. And it’s impressive!

        But all the hype monkeys out there are trying to pedestal it like some kind of techno-super-intelligence, completely ignoring what it is good for in favor of “It’ll replace all human coders” fever dreams.

      • Pennomi@lemmy.world · 2 months ago

        Cherry picking the things it doesn’t do well is fine, but you shouldn’t ignore the fact that it DOES do some things easily also.

        Like all tools, use them for what they’re good at.

        • wischi@programming.dev · 2 months ago

          I don’t think it’s cherry picking. Why would I trust a tool with far more complex logic when it can’t even prevent three crosses in a row? Writing pretty much any software that does more than render a few buttons typically requires a lot of planning and thinking, and those models clearly don’t have the capability to plan and think when they lose tic tac toe games.

            • wischi@programming.dev · 2 months ago

              A drill press (or its inventors) doesn’t claim it can do that, but LLMs are claimed to replace humans on a lot of thinking tasks. They even brag with test benchmarks, claim Bachelor’s, Master’s and PhD-level intelligence, and call them “reasoning” models, but they still fail to beat my niece at tic tac toe, who by the way doesn’t have a PhD in anything 🤣

              LLMs are typically good at things that appeared a lot during training. If you are writing software, there certainly are things which the LLM saw a lot of during training. But this is actually the biggest problem: it will happily generate code that might look OK, even during PR review, but might blow up in your face a few weeks later.

              If they can’t handle things they saw during training (even if only sparsely, like tic tac toe), they won’t produce code you should use in production. I wouldn’t trust any junior dev who doesn’t set their O right next to the two Xs.

              • Pennomi@lemmy.world · 2 months ago

                Sure, the marketing of LLMs is wildly overstated. I would never argue otherwise. This is entirely a red herring, however.

                I’m saying you should use the tools for what they’re good at, and don’t use them for what they’re bad at. I don’t see why this is controversial at all. You can personally decide that they are good for nothing. Great! Nobody is forcing you to use AI in your work. (Though if they are, you should find a new employer.)

                • wischi@programming.dev · 2 months ago

                  Totally agree with that, and I don’t think anybody would see that as controversial. LLMs are actually good at a lot of things, but not thinking, and typically not if you are an expert. That’s why LLMs know more about the anatomy of humans than I do, but probably not more than most people with a medical degree.

              • wischi@programming.dev · 2 months ago

                I can’t speak for Lemmy, but I’m personally not against LLMs and also use them on a regular basis. As Pennomi said (and I totally agree with that), LLMs are a tool and we should use that tool for the things it’s good at. But “thinking” is not one of the things LLMs are good at. And software engineering requires a ton of thinking. Of course there are things (boilerplate, etc.) where no real thinking is required, but non-AI tools like code completion/IntelliSense, macros and code snippets/templates can help with that, and I was never bottlenecked by my typing speed when writing software.

                It was always the time I needed to plan the structure of the software, design good and correct abstractions and the overall architecture. Exactly the things LLMs can’t do.

                Copilot even fails to stick to the coding style of the same file, just because it saw a different style more often during training.

                • Zexks@lemmy.world · 2 months ago

                  “I’m not against LLMs, I just never say anything useful about them and constantly point out how I can’t use them.” The other guy is right, and you just proved his point.

    • Opisek@lemmy.world · 2 months ago

      Perhaps 5 LOC. Maybe 3. And even then I’ll analyze every single character it wrote. And then I will in fact find bugs. Most often it hallucinates some functions that would be fantastic to use - if they existed.

      • Buddahriffic@lemmy.world · 2 months ago

        My guess is that there’s tons of pseudocode out there that looks like a real language but uses functions that don’t exist as placeholders, and the LLM noticed the pattern to the point where it just makes up functions, not realizing they need to be implemented (because LLMs don’t realize things, they just match very complex patterns).

  • chunes@lemmy.world · 2 months ago

    Laugh it up while you can.

    We’re in the “haha it can’t draw hands!” phase of coding.

    • GreenKnight23@lemmy.world · 2 months ago

      someone drank the koolaid.

      LLMs will never code for two reasons.

      one, because they only regurgitate facsimiles of code. this is because the models are trained to ingest content and provide an interpretation of the collection of their content.

      software development is more than that and requires strategic thought and conceptualization, both of which are decades away from AI at best.

      two, because the prevalence of LLM generated code is destroying the training data used to build models. think of it like making a copy of a copy of a copy, et cetera.

      the more popular it becomes the worse the training data becomes. the worse the training data becomes the weaker the model. the weaker the model, the less likely it will see any real use.

      so yeah. we’re about 100 years from the whole “it can’t draw its hands” stage because it doesn’t even know what hands are.

      • chunes@lemmy.world · 2 months ago

        This is just your ego talking. You can’t stand the idea that a computer could be better than you at something you devoted your life to. You’re not special. Coding is not special. It happened to artists, chess players, etc. It’ll happen to us too.

        I’ll listen to experts who study the topic over an internet rando. AI model capabilities as yet show no signs of slowing their exponential growth.

        • GreenKnight23@lemmy.world · 2 months ago

          you’re a fool. chess has rules and is boxed into those rules. of course it’s prime for AI.

          art is subjective, I don’t see the appeal personally, but I’m more of a baroque or renaissance fan.

          I doubt you will but if you believe in what you say then this will only prove you right and me wrong.

          what is this?

          [image attachment: 1000001583]

          once you classify it, why did you classify it that way? is it because you personally have one? did you have to rule out what it isn’t before you could identify what it could be? did you compare it to other instances of similar subjects?

          now, try to classify it as someone who doesn’t have these. someone who has never seen one before. someone who hasn’t any idea what it could be used for. how would you identify what it is? how it’s used? are there more than one?

          now, how does AI classify it? does it comprehend what it is, even though it lacks a physical body? can it understand what it’s used for? how it feels to have one?

          my point is, AI is at least 100 years away from instinctively knowing what a hand is. I doubt you had to even think about it and your brain automatically identified it as a hand, the most basic and fundamentally important features of being a human.

          if AI cannot even instinctively identify a hand as a hand, it’s not possible for it to write software, because writing is based on human cognition and is entirely driven by instinct.

          like a master sculptor, we carve out the words from the ether to perform tasks that not only are required, but unseen requirements that lay beneath the surface that are only known through nuance. just like the sculptor that has to follow the veins within the marble.

          the AI you know today cannot do that, and frankly the hardware of today can’t even support AI in achieving that goal, and it never will because of people like you promoting a half baked toy as a tool to replace nuanced human skills. only for this toy to poison pill the only training data available, that’s been created through nuanced human skills.

          I’ll just add, I may be an internet rando to you but you and your source are just randos to me. I’m speaking from my personal experience in writing software for over 25 years along with cleaning up all this AI code bullshit for at least two years.

          AI cannot code. AI writes regurgitated facsimiles of software based on its limited dataset. it’s impossible for it to make decisions based on human nuance and it can only make calculated assumptions based on the available dataset.

          I don’t know how much clearer I have to be at how limited AI is.

        • wischi@programming.dev · 2 months ago

          Coding isn’t special, you are right, but it’s a thinking task, and LLMs (including reasoning models) don’t know how to think. LLMs are knowledgeable because they memorized a lot of the data and patterns in the training data, but they didn’t learn to think from that. That’s why LLMs can’t replace humans.

          That certainly doesn’t mean that software can’t be smarter than humans. It will be, and it’s just a matter of time, but to get there we likely need AGI first.

          To show you that LLMs can’t think, try to play ASCII tic tac toe (XXO) against all those models. They are completely dumb at it, even though the entire Wikipedia article on how XXO works - that it’s a solved game, the different strategies, and how to consistently draw - was in the training data, but they still can’t do it. They lose most games against my four-year-old niece, and she doesn’t even play good/perfect XXO.

          I wouldn’t trust anything that is claimed to do thinking tasks, but can’t even beat my niece at XXO, to write firmware for cars or airplanes.

          LLMs are great if used like search engines or interactive versions of Wikipedia/Stack Overflow. But they certainly can’t think, for now at least; we’ll likely need different architectures than LLMs for real thinking models.

    • Soleos@lemmy.world · 2 months ago

      AI bad. But also, video AI started with Will Smith eating spaghetti just a couple of years ago.

      We keep talking about AI doing complex tasks right now and its limitations, then extrapolating its development linearly. It’s not linear and it’s not in one direction. It’s an exponential and rhizomatic process. Humans always over-estimate (ignoring hard limits) and under-estimate (thinking linearly) how these things go. With rocket ships, with the internet/social media, and now with AI.

  • antihumanitarian@lemmy.world · 2 months ago

    I’ve used it extensively, almost $100 in credits, and generally it could one-shot everything I threw at it. However: I gave it architectural instructions and told it to use test-driven development and which test suite to use. Without the tests, yeah, it wouldn’t work, and a decent amount of the time goes into cleaning up mistakes the tests caught. The same can be said for humans, though.

    • Lyra_Lycan@lemmy.blahaj.zone · 2 months ago

      How can it pass if it hasn’t had lessons… Well said. Ooh, I wonder if lecture footage would be able to teach AI, or audio input from tutors…

  • sturger@sh.itjust.works · 2 months ago

    Honest question: I haven’t used AI much. Are there any AIs or IDEs that can reliably rename a variable across all instances in a medium-sized Python project? I don’t mean easy stuff that an editor can do (e.g. rename QQQ in all instances and get lucky that there are no conflicts). I mean being able to differentiate between local and/or library variables so it doesn’t change them, only the correct versions.

    • pinball_wizard@lemmy.zip · 2 months ago

      Okay, I realize I’m that person, but for those interested:

      tree, cat and sed get the job done nicely.
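
      And if sed is too blunt because it can’t tell scopes apart, Python’s rope library can do the rename programmatically; a rough, untested sketch (the file path and symbol names are made up):

      ```python
      from rope.base.project import Project
      from rope.refactor.rename import Rename

      project = Project(".")                             # root of the code base
      module = project.get_resource("mypkg/core.py")     # file containing the definition
      offset = module.read().index("old_name")           # offset of the symbol to rename

      changes = Rename(project, module, offset).get_changes("new_name")
      print(changes.get_description())                   # preview every affected file
      project.do(changes)                                # apply the scope-aware rename
      project.close()
      ```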

      And… it’s my nap time, now. Please keep the Internet working, while I’m napping. I have grown fond of parts of it. Goodnight.

    • lapping6596@lemmy.world · 2 months ago

      I use pycharm for this and in general it does a great job. At work we’ve got some massive repos and it’ll handle it fine.

      The “find” tab shows where it’ll make changes and you can click “don’t change anything in this directory”

      • setVeryLoud(true);@lemmy.ca · 2 months ago

        Yes, all of JetBrains’ tools handle project-wide renames practically perfectly, even in weirder things like Angular projects where templates may reference variables.

    • barsoap@lemm.ee · 2 months ago

      Not reliably, no. Python is too dynamic to do that kind of thing without solving general program equivalence, which is undecidable.

      Use a static language, problem solved.

    • LeroyJenkins@lemmy.world · 2 months ago

      most IDEs are pretty decent at it if you configure them correctly. I used IntelliJ and it knows the difference. use the refactor feature and it’ll crawl references, not just rename all instances.

    • killabeezio@lemm.ee · 2 months ago

      IntelliJ is actually pretty good at this. Besides that, Cursor or Windsurf should be able to. I was using Cursor for a while and when I needed to refactor something, it was pretty good at picking that up. It kept crashing on me though, so I am now trying Windsurf and some other options. I am missing the autocomplete features in Cursor though, as I would use them all the time to fill out boilerplate stuff as I write.

      The one key difference of Cursor and Windsurf compared to other products is that they will look at the entire context again for any changes, or at least a little bit of it. You make a change, and they look at whether they need to make changes elsewhere.

      I still don’t trust AI to do much, but it’s an excellent helper.

      • sturger@sh.itjust.works · 2 months ago

        Yeah, I’m looking for something that would understand the operation (? insert correct term here) of the language well enough to rename intelligently.

    • Derpgon@programming.dev · 2 months ago

      IntelliJ IDEA, if it knows it is the same variable, will rename it. It usually works in a non-fucked-up codebase, i.e. one that doesn’t use eval or obscure constructs like saving a variable name into a string and dynamically invoking it.

    • trolololol@lemmy.world · 2 months ago

      I’m going to laugh in Java, where this has always been possible and reliable. Not AI reliable, but expert reliable. Because of static types.

    • petey@aussie.zone · 2 months ago

      It needs good feedback. Agentic systems like Roo Code and Claude Code run compilers and tests until it works (just gotta make sure to tell it to leave the tests alone)

    • kkj@lemmy.dbzer0.com · 2 months ago

      And that’s what happens when you spend a trillion dollars on an autocomplete: amazing at making things look like whatever it’s imitating, but with zero understanding of why the original looked that way.

      • CanadaPlus@lemmy.sdf.org · 2 months ago

        I mean, there’s about a billion ways it’s been shown to have actual coherent originality at this point, and so it must have understanding of some kind. That’s how I know I and other humans have understanding, after all.

        What it’s not is aligned to care about anything other than making plausible-looking text.

        • Jtotheb@lemmy.world · 2 months ago

          Coherent originality does not point to the machine’s understanding; the human is the one capable of finding a result coherent and weighting their program to produce more results in that vein.

          Your brain does not function in the same way as an artificial neural network, nor are they even in the same neighborhood of capability. John Carmack estimates the brain to be four orders of magnitude more efficient in its thinking; Andrej Karpathy says six.

          And none of these tech companies even pretend that they’ve invented a caring machine that they just haven’t inspired yet. Don’t ascribe further moral and intellectual capabilities to server racks than do the people who advertise them.

          • CanadaPlus@lemmy.sdf.org · 2 months ago

            Coherent originality does not point to the machine’s understanding; the human is the one capable of finding a result coherent and weighting their program to produce more results in that vein.

            You got the “originality” part there, right? I’m talking about tasks that never came close to being in the training data. Would you like me to link some of the research?

            Your brain does not function in the same way as an artificial neural network, nor are they even in the same neighborhood of capability. John Carmack estimates the brain to be four orders of magnitude more efficient in its thinking; Andrej Karpathy says six.

            Given that both biological and computer neural nets vary by orders of magnitude in size, that means pretty little. It’s true that one is based on continuous floats and the other is dynamic peaks, but the end result is often remarkably similar in function and behavior.

            • borari@lemmy.dbzer0.com · 2 months ago

              It’s true that one is based on continuous floats and the other is dynamic peaks

              Can you please explain what you’re trying to say here?

              • CanadaPlus@lemmy.sdf.org · 2 months ago

                Both have neurons with synapses linking them to other neurons. In the artificial case, synapse activation can be any floating point number, and outgoing synapses are calculated from incoming synapses all at once (there’s no notion of time; it’s not dynamic). Biological neurons are binary: they either fire or do not fire, and during a firing cycle they ramp up to a peak potential and then drop down in a predictable fashion. But it’s dynamic; they can peak at any time and downstream neurons can begin to fire “early”.
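
                To make the artificial half concrete, here’s a toy sketch in plain NumPy (made-up weights) of “outgoing activations computed from incoming ones all at once”:

                ```python
                import numpy as np

                # Toy layer: 3 incoming synapses feeding 2 artificial neurons.
                inputs = np.array([0.2, -1.0, 0.5])          # activations from upstream neurons
                weights = np.array([[0.4, -0.6, 0.1],        # one row of synapse weights per neuron
                                    [0.9,  0.3, -0.2]])
                bias = np.array([0.05, -0.1])

                # All outgoing activations are computed in a single step; no notion of time.
                z = weights @ inputs + bias                  # weighted sum of incoming activations
                activation = np.maximum(z, 0.0)              # ReLU: any non-negative float, not a binary spike

                print(activation)
                ```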

                They do seem to be equivalent in some way, although AFAIK it’s unclear how at this point, and the exact activation function of each brain neuron is a bit mysterious.

                • borari@lemmy.dbzer0.com · 2 months ago

                  Ok, thanks for that clarification. I guess I’m a bit confused as to why a comparison is being drawn between neurons in a neural network and neurons in a biological brain though.

                  In a neural network, the neuron receives an input, performs a mathematical formula, and returns an output right?

                  Like you said we have no understanding of what exactly a neuron in the brain is actually doing when it’s fired, and that’s not considering the chemical component of the brain.

                  I understand why terminology was reused when experts were designing an architecture that was meant to replicate the architecture of the brain. Unfortunately, I feel like that reuse of terminology is making it harder for laypeople to understand what a neural network is and what it is not now that those networks are a part of the zeitgeist thanks to the explosion of LLM’s and stuff.

              • CanadaPlus@lemmy.sdf.org · 2 months ago

                I actually was going to link the same one I always do, which I think I heard about through a blog or talk. If that’s not good enough, it’s easy to devise your own test and put it to an LLM. The way you phrased that makes it sound like you’re more interested in ignoring any empirical evidence, though.

                • Jtotheb@lemmy.world · 2 months ago

                  That’s unreal. No, you cannot come up with your own scientific test to determine a language model’s capacity for understanding. You don’t even have access to the “thinking” side of the LLM.

  • LanguageIsCool@lemmy.world · 2 months ago

    I’ve heard that a Claude 4 model generating code for an infinite amount of time will eventually simulate a monkey typing out Shakespeare

    • MonkeMischief@lemmy.today · 2 months ago

      It will have consumed the GigaWattHours capacity of a few suns and all the moisture in our solar system, but by Jeeves, we’ll get there!

      …but it won’t be that impressive once we remember concepts like “monkey, typing, Shakespeare” were already embedded in the training data.

  • Xerxos@lemmy.ml · 2 months ago

    All programs can be written with one less line of code. All programs have at least one bug.

    By the logical consequences of these axioms every program can be reduced to one line of code - that doesn’t work.

    One day AI will get there.

    • gmtom@lemmy.world · 2 months ago

      All programs can be written with one less line of code. All programs have at least one bug.

      The humble “Hello world” would like a word.

      • phx@lemmy.ca · 2 months ago

        You can fit an awful lot of Perl into one line too if you minimize it. It’ll be completely unreadable to most anyone, but it’ll run

      • Amberskin@europe.pub · 2 months ago

        Just to boast my old timer credentials.

        There is a utility program in IBM’s mainframe operating system, z/OS, that has been there since the 60s.

        It has just one assembly code instruction: a BR 14, which means basically ‘return’.

        The first version was bugged and IBM had to issue a PTF (patch) to fix it.

        • DaPorkchop_@lemmy.ml · 2 months ago

          Okay, you can’t just drop that bombshell without elaborating. What sort of bug could exist in a program which contains a single return instruction?!?

          • Amberskin@europe.pub · 2 months ago

            It didn’t clear the return code. In mainframe jobs, successful executions are expected to return zero (in the machine R15 register).

            So in this case fixing the bug required adding an instruction instead of removing one.

        • Rose@slrpnk.net · 2 months ago

          Reminds me of how in some old Unix system, /bin/true was a shell script.

          …well, if it needs to just be a program that returns 0, that’s a reasonable thing to do. An empty shell script returns 0.

          Of course, since this was an old proprietary Unix system, the shell script had a giant header comment that said this is proprietary information and if you disclose this the lawyers will come at ya like a ton of bricks. …never mind that this was a program that literally does nothing.

  • coherent_domain@infosec.pub · 2 months ago

    The image is taken from Zhihu, a Chinese Quora-like site.

    The prompt asks for a design of a certain app, and the response seems to talk about some suggested pages.

    But this in general aligns with my experience of coding with LLMs. I was trying to upgrade my eslint from 8 to 9, asked ChatGPT to convert my eslint file, and it proceeded to spit out complete garbage.

    I thought this would be a good task for an LLM because eslint configs are very common and the transformation is very mechanical, but it just cannot do it. So I proceeded to read the documentation and finished the migration in a couple of hours…

    • petey@aussie.zone · 2 months ago

      I used Claude 3.7 to upgrade my eslint configs to flat config and from v7 to v9 with Roo Code, and it did it perfectly.

    • Lucy :3@feddit.org · 2 months ago

      I asked ChatGPT for help with bare-metal 32-bit ARM (for the Pi Zero W) C/ASM, emulated in QEMU for testing, and after the third iteration of “use printf for output” -> “there’s no printf with bare metal as the target” -> “use solution X” -> “doesn’t work” -> “use printf for output” … I had enough.

      • qqq@lemmy.world · 2 months ago

        QEMU makes it pretty painless to hook up gdb just FYI; you should look into that. I think you can also have it provide a memory mapped UART for I/O which you can use with newlib to get printf debugging

      • Scrubbles@poptalk.scrubbles.tech · 2 months ago

        Yeah you can tell it just ratholes on trying to force one concept to work rather than realizing it’s not the correct concept to begin with

        • formulaBonk@lemm.ee · 2 months ago

          That’s exactly what most junior devs do when stuck. They rehash the same solution over and over, and it almost seems like LLMs trained on code bases infer that behavior from commit histories etc.

          It almost feels like one of those “we taught him these tasks incorrectly as a joke” scenarios.

      • Björn Tantau@swg-empire.de · 2 months ago

        I used ChatGPT to help me make a package with SUSE’s Open Build Service. It was actually quite good. I was pulling my hair out for a while until I noticed that the project I wanted to build had changed URLs and I was using an outdated one.

        In the end I just had to get one last detail right. And then my ChatGPT 4 allowance dried up and they dropped me back down to 3 and it couldn’t do anything. So I had to use my own brain, ugh.

        • noctivius@lemm.ee · 2 months ago

          ChatGPT is the worst among the big chatbots at writing code. From my experience: DeepSeek > Perplexity > Gemini > Claude.

    • MudMan@fedia.io · 2 months ago

      It’s pretty random in terms of what is or isn’t doable.

      For me it’s a big performance booster because I genuinely suck at coding and don’t do too much complex stuff. As a “clean up my syntax” and a “what am I missing here” tool it helps, or at least helps in figuring out what I’m doing wrong so I can look in the right place for the correct answer on something that seemed inscrutable at a glance. I certainly can do some things with a local LLM I couldn’t do without one (or at least without getting berated by some online dick who doesn’t think he has time to give you an answer but sure has time to set you on a path towards self-discovery).

      How much of a benefit it is for a professional I couldn’t tell. I mean, definitely not a replacement. Maybe helping read something old or poorly commented fast? Redundant tasks on very commonplace mainstream languages and tasks?

      I don’t think it’s useless, but if you ask it to do something by itself you can’t trust that it’ll work without significant additional effort.

      • vivendi@programming.dev · 2 months ago

        It’s not much use with a professional codebase as of now, and I say this as a big proponent of learning FOSS AI to stay ahead of the corpocunts

        • MudMan@fedia.io · 2 months ago

          Yeah, the AI corpos are putting a lot of effort into parsing big contexts right now. I suspect because they think (probably correctly) that coding is one of the few areas where they could get paid if their AIs didn’t have the memory of a goldfish.

          And absolutely agreed that making sure the FOSS alternatives keep pace is going to be important. I’m less concerned about hating the entire concept than I am about making sure they don’t figure out a way to keep every marginally useful application behind a corporate ecosystem walled garden exclusively.

          We’ve been relatively lucky in that the combination of PR brownie points and general crappiness of the commercial products has kept an incentive to provide a degree of access, but I have zero question that the moment one of these things actually makes money they’ll enshittify the freely available alternatives they control and clamp down as much as possible.

        • MudMan@fedia.io · 2 months ago

          Sorta kinda. It depends on where you put that line. I think because online drama is fun, once we got to the “vibe coding” name we moved to the assumption that all AI assistance is vibe coding. But in practice there’s the percentage of what you do that you know how to do, the percentage you vibe code because you can’t figure it out yourself, and the percentage you just can’t do without researching because the LLM can’t do it effectively or the stuff it can do is too crappy to use as part of something else.

          I think if the assumption is you should just “git gud” and not take advantage of that grey zone where you can sooort of figure it out by asking an AI instead of going down a Google rabbit hole then the performative AI hate is setting itself up for defeat, because there’s a whole bunch of skill ranges where that is actually helpful for some stuff.

          If you want to deny that there’s a difference between that and just making code soup by asking a language model to build you entire pieces of software… well, then you’re going to be obviously wrong and a bunch of AI bros are going to point at the obvious way you’re wrong and use that to pretend you’re wrong about the whole thing.

          This is basic online disinformation playbook stuff and I may suck at coding, but I know a thing or two about that. People with progressive ideas should get good at beating those one of these days, because that’s a bad outcome.

          • jcg@halubilo.social · 2 months ago

            People seem to disagree but I like this. This is AI code used responsibly. You’re using it to do more, without outsourcing all your work to it and you’re actively still trying to learn as you go. You may not be “good at coding” right now but with that mindset you’ll progress fast.

      • wise_pancake@lemmy.ca · 2 months ago

        It catches things like spelling errors in variable names, does good autocomplete, and it’s useful to have it look through a file before committing it and creating a pull request.

        It’s very useful for throwaway work like writing scripts and automations.

        It’s useful, but not a 10x multiplier like all the CEOs claim it is.

        • MudMan@fedia.io · 2 months ago

          Fully agreed. Everybody is betting it’ll get there eventually and trying to jockey for position being ahead of the pack, but at the moment there isn’t any guarantee that it’ll get to where the corpos are assuming it already is.

          Which is not the same as not having better autocomplete/spellcheck/“hey, how do I format this specific thing” tools.

          • jcg@halubilo.social · 2 months ago

            I think the main barriers are context length (useful context. GPT-4o has “128k context” but it’s mostly sensitive to the beginning and end of the context and blurry in the middle. This is consistent with other LLMs), and just data not really existing. How many large scale, well written, well maintained projects are really out there? Orders of magnitude less than there are examples of “how to split a string in bash” or “how to set up validation in spring boot”. We might “get there”, but it’ll take a whole lot of well written projects first, written by real humans, maybe with the help of AI here and there. Unless, that is, we build it with the ability to somehow learn and understand faster than humans.

          • wise_pancake@lemmy.ca · 2 months ago

            Yeah, it’s still super useful.

            I think the execs want to see dev salaries go to zero, but these tools make more sense as an accelerator, like giving an accountant excel.

            I get a bit more done faster, that’s a solid value proposition.

    • TrickDacy@lemmy.world · 2 months ago

      I wouldn’t say it’s accurate that this was a “mechanical” upgrade, having done it a few times. They even have a migration tool which you’d think could fully do the upgrade, but out of the probably 4-5 projects I’ve upgraded, the migration tool always produced a config that errored and needed several obscure manual changes to get working. All that to say, it seems like a particularly bad candidate for LLMs.

      • Scrubbles@poptalk.scrubbles.tech · 2 months ago

        No, still “perfect” for LLMs. There’s nuance, there are patterns being used; it should be able to handle it perfectly. Enough people on Stack Overflow asked enough questions; if AI is what Google and Microsoft claim it is, it should have handled it.

      • coherent_domain@infosec.pub · 2 months ago

        Then I am quite confused about what LLMs are supposed to help me with. I am not a programmer, and I am certainly not a TypeScript programmer. This is why I postponed my eslint upgrade for half a year, since I don’t have a lot of experience in TypeScript, besides one project in my college webdev class.

        So if I can sit down for a couple of hours to port my rather simple eslint config, which is arguably the most mechanical task I have seen in my limited programming experience, and the LLM can’t produce anything close to correct, then I am rather confused about what “real programmers” would use it for…

        People here say boilerplate code, but honestly I don’t quite recall the last time I needed to write a lot of boilerplate code.

        I have also tried to use LLMs to debug SELinux and Docker containers on my homelab; unfortunately, they are absolutely useless at that as well.

        • TrickDacy@lemmy.world · 2 months ago

          With all due respect, how can you weigh in on programming so confidently when you admit to not being a programmer?

          People tend to despise or evangelize LLMs. To me, github copilot has a decent amount of utility. I only use the auto-complete feature which does things like save me from typing 2-5 predictable lines of code that devs tend to type all the time. Instead of typing it all, I press tab. It’s just a time saver. I have never used it like “write me a script or a function that does x” like some people do. I am not interested in that as it seems like a sad crutch that I’d need to customize so much anyway that I may as well skip that step.

          Having said that, I’m noticing the Copilot autocomplete seems to be getting worse over time. I’m not sure why it’s worsening, but if it ever feels not worth it anymore I’ll drop it, no harm no foul. The binary thinkers tend to think you’re either a good dev who despises all forms of AI or you’re an idiot who tries to have a robot write all your code for you. As a dev for the past 20 years, I see no reason to choose between those two opposites. It can be useful in some contexts.

          PS. Did you try the eslint 8 -> 9 migration tool? If your config was simple enough for it, it likely would’ve done all or almost all the work for you… It didn’t fully work for me. I had to resolve several errors, because I tend to add several custom plugins, presets, and rules that differ across projects.

          • coherent_domain@infosec.pub · 2 months ago

            Sorry, the language of my original post might seem confrontational, but that is not my intention; I’m trying to find value in LLMs, since people are excited about them.

            I am not a professional programmer, nor am I programming any industrial-sized project at the moment. I am a computer scientist, and my current research project does not involve much programming. But I do teach programming to undergrad and master’s students, so I want to understand what a good use case for this technology is, and when I can expect it to be helpful.

            Indeed, I am frustrated by this technology, and that might have shifted my language further than I intended. Everyone is promoting this as a magically helpful tool for CS and math, yet I fail to see any good applications for either in my work, despite going back to it every couple of months or so.


            I did try @eslint/migrate-config; unfortunately, it added a good amount of bloat and ended up not working.

            So I just gave up and read the docs.

            • TrickDacy@lemmy.world · 2 months ago

              Gotcha. No worries. I figured you were coming in good faith but wasn’t certain. Who is pushing LLMs for programming that hard? In my bubble, which often includes Lemmy, most people HATE them for all uses. I get that tech bros and LinkedIn crazies probably push this tech for coding a lot, but outside of that, most devs I know IRL are either lukewarm on or dislike LLMs for dev work.

    • Cethin@lemmy.zip · 2 months ago

      I use it sometimes, usually just to create boilerplate. For actual functionality it’s hit or miss, and often it ends up taking more time to fix than to write it myself.

  • Drunk & Root@sh.itjust.works · 2 months ago

    can’t wait to see “we use AI agents to generate well-structured non-functioning code” with off-centered everything and non-working embeds on the website