• oakey66@lemmy.world · 2 months ago

    AGI is not in reach. We need to stop this incessant parroting from tech companies. LLMs are stochastic parrots. They guess the next word. There’s no thought or reasoning. They don’t understand inputs. They mimic human speech. They’re not presenting anything meaningful.
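
    As a toy illustration of the “guess the next word” mechanic described above, here is a minimal sketch (the probability table and function are invented for illustration; no real LLM is a hand-written lookup table, but the sampling loop captures the basic idea):

    ```python
    import random

    # Toy next-word table: each word maps to candidate continuations
    # with weights. Real models learn billions of parameters instead.
    NEXT_WORD_PROBS = {
        "the": {"cat": 0.5, "dog": 0.3, "market": 0.2},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"barked": 0.6, "slept": 0.4},
        "sat": {"down": 1.0},
    }

    def generate(start, max_words=5):
        """Repeatedly sample the next word given only the previous one."""
        words = [start]
        while len(words) < max_words:
            options = NEXT_WORD_PROBS.get(words[-1])
            if not options:
                break
            choices, weights = zip(*options.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat down"
    ```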

    • raspberriesareyummy@lemmy.world · 2 months ago

      I feel like I have found a lone voice of sanity in a jungle of brainless fanpeople sucking up the snake oil and pretending LLMs are AI. A simple control loop is closer to AI than a stochastic parrot, as you correctly put it.

      • Opinionhaver@feddit.uk · 2 months ago

        pretending LLMs are AI

        LLMs are AI. There’s a common misconception about what ‘AI’ actually means. Many people equate AI with the advanced, human-like intelligence depicted in sci-fi - like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY. These systems represent a type of AI called AGI (Artificial General Intelligence), designed to perform a wide range of tasks and demonstrate a form of general intelligence similar to humans.

        However, AI itself doesn’t imply general intelligence. Even something as simple as a chess-playing robot qualifies as AI. Although it’s a narrow AI, excelling in just one task, it still fits within the AI category. So, AI is a very broad term that covers everything from highly specialized systems to the type of advanced, adaptable intelligence that we often imagine. Think of it like the term ‘plants,’ which includes everything from grass to towering redwoods - each different, but all fitting within the same category.

          • Opinionhaver@feddit.uk · 2 months ago

            It’s not. Bubble sort is a purely deterministic algorithm with no learning or intelligence involved.

              • Opinionhaver@feddit.uk · 2 months ago

                Bubble sort is just a basic set of steps for sorting numbers - it doesn’t make choices or adapt. A chess engine, on the other hand, looks at different possible moves, evaluates which one is best, and adjusts based on the opponent’s play. It actively searches through options and makes decisions, while bubble sort just follows the same repetitive process no matter what. That’s a huge difference.
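
                To make that contrast concrete, here’s a minimal sketch (a toy take-the-last-stick game stands in for chess; the names and rules are invented for illustration, not taken from any real engine):

                ```python
                def bubble_sort(xs):
                    """Fixed steps, same behaviour every time: no search, no evaluation."""
                    xs = list(xs)
                    for i in range(len(xs)):
                        for j in range(len(xs) - 1 - i):
                            if xs[j] > xs[j + 1]:
                                xs[j], xs[j + 1] = xs[j + 1], xs[j]
                    return xs

                def best_move(pile, player=1):
                    """Tiny game-tree search: players alternate taking 1 or 2 sticks;
                    whoever takes the last stick wins. Like a (vastly simplified) chess
                    engine, it enumerates candidate moves, evaluates where they lead,
                    and picks one that forces a win if such a move exists."""
                    if pile == 0:
                        return None, -player      # current player cannot move: they lost
                    for take in (1, 2):
                        if take <= pile:
                            _, winner = best_move(pile - take, -player)
                            if winner == player:  # this move forces a win
                                return take, player
                    return 1, -player             # every move loses; take one anyway

                print(bubble_sort([3, 1, 2]))  # always [1, 2, 3]
                print(best_move(5))            # (2, 1): take two, player 1 wins with best play
                ```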

                • jenesaisquoi@feddit.org · 2 months ago

                  Your argument can be reduced to saying that if the algorithm is composed of many steps, it is AI, and if not, it isn’t.

                  A chess engine decides nothing. It understands nothing. It’s just an algorithm.

        • raspberriesareyummy@lemmy.world · 2 months ago

          Here we go… Fanperson explaining the world to the dumb lost sheep. Thank you so much for stepping down from your high horse to try and educate a simple person. /s

          • Opinionhaver@feddit.uk · 2 months ago

            How’s insulting the people respectfully disagreeing with you working out so far? That ad hominem was completely uncalled for.

            • raspberriesareyummy@lemmy.world · 2 months ago

              “Fanperson” is an insult now? Cry me a river, snowflake. Also, you weren’t disagreeing, you were explaining something to someone perceived less knowledgeable than you, while demonstrating you have no grasp of the core difference between stochastics and AI.

      • SinningStromgald@lemmy.world · 2 months ago

        There are at least three of us.

        I am worried about what happens when the bubble finally pops, because shit always rolls downhill and most of us are at the bottom of the hill.

        • raspberriesareyummy@lemmy.world · 2 months ago

          Not sure if we need that particular bubble to pop for us to be drowned in a sea of shit, looking at the state of the world right now :( But Silicon Valley seems to be at the core of this clusterfuck, as if all the villains come from there or flock there…

    • biggerbogboy@sh.itjust.works · 2 months ago

      My favourite way to describe LLMs is to liken them to autocorrect: it just guesses, it gets stuff wrong, and it’s constantly being retrained to recognise your preferences, such as eventually learning not to correct fuck to duck.

      And it’s funny and sad how some people think these LLMs are their friends. Like, no, it’s a colossally sized autocorrect system you can’t fully comprehend; it has no consciousness, no thought, it just predicts from a prompt using numerical weights and a neural network.
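
      A rough sketch of that “retrained to recognise your preferences” behaviour (purely illustrative: the word list, counter, and threshold are invented, and no real keyboard is implemented this way):

      ```python
      from collections import Counter

      # Toy autocorrect that gives up on a correction once the user has
      # overridden it often enough.
      CORRECTIONS = {"fuck": "duck"}
      GIVE_UP_AFTER = 3          # overrides before the correction is dropped
      overrides = Counter()

      def autocorrect(word):
          if word in CORRECTIONS and overrides[word] < GIVE_UP_AFTER:
              return CORRECTIONS[word]
          return word

      def user_keeps(word):
          """Called when the user rejects the correction and keeps their word."""
          overrides[word] += 1

      for _ in range(4):
          print(autocorrect("fuck"))  # "duck" three times, then "fuck"
          user_keeps("fuck")
      ```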

    • Jesus_666@lemmy.world · 2 months ago

      That undersells them slightly.

      LLMs are powerful tools for generating text that looks like something. Need something rephrased in a different style? They’re good at that. Need something summarized? They can do that, too. Need a question answered? No can do.

      LLMs can’t generate answers to questions. They can only generate text that looks like answers to questions. Often enough that answer is even correct, though usually suboptimal. But they’ll also happily generate complete bullshit answers, and to them there’s no difference between that and a real answer.

      They’re text transformers marketed as general problem solvers because a) the market for text transformers isn’t that big and b) a general problem solver is what AI researchers have always been trying to create. They have their use cases, but certainly not ones worth the kind of spending they get.

        • CarbonBasedNPU@lemm.ee · 2 months ago

          They make shit up fucking constantly. If I have to google whether the answer I was given was right, I might as well cut out the middleman and just google it myself. If I can’t understand it at that point, maybe then I’d ask the LLM to rephrase the answer.

          • Blue_Morpho@lemmy.world · 2 months ago

            You missed the part where DeepSeek uses a separate inference engine to take the LLM output and reason through it to see if it makes sense.

            No, it’s not perfect. But it isn’t just predicting text the way AI was a couple of years ago.
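
            For what it’s worth, the general “generate a draft, then check it” pattern described here looks roughly like this sketch (draft_answer and check_consistency are placeholders, not DeepSeek’s or any vendor’s actual API):

            ```python
            # Sketch of a generate-then-verify loop. The two helpers are stand-ins
            # for an LLM call and a separate reasoning/consistency check.

            def draft_answer(prompt: str) -> str:
                """Stand-in for a model producing a candidate answer."""
                return f"candidate answer for: {prompt}"

            def check_consistency(prompt: str, answer: str) -> bool:
                """Stand-in for a second pass that critiques the draft."""
                return "candidate" in answer  # dummy check for illustration

            def answer_with_verification(prompt: str, max_attempts: int = 3) -> str:
                for _ in range(max_attempts):
                    draft = draft_answer(prompt)
                    if check_consistency(prompt, draft):
                        return draft
                return "no answer passed verification"

            print(answer_with_verification("Why is the sky blue?"))
            ```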

    • Opinionhaver@feddit.uk · 2 months ago

      Why is AGI not in reach? What insight do you have on the matter that you can so confidently make an absolute statement like that?

      • zbyte64@awful.systems · 2 months ago

        Billionaires are often referred to as dragons because they hoard wealth. A guillotine that could tell the difference and decide to only harm billionaires would be a technological marvel.

    • TheFogan@programming.dev · 2 months ago

      Or, worse, they might actually have to hire enough people to do the job. Why hire 100 people with good work-life balance when you can hire 60 people that aren’t allowed to have lives or families?

      • Sundray@lemmy.sdf.org · 2 months ago

        60 ~~people~~ workers that aren’t allowed to have lives or families

        I mean, that’s what the AI will be for…

        • TheFogan@programming.dev · 2 months ago

          Exactly, that’s where it should be doubled down… if their own estimates are correct, it’s only a 6-month expense. If they really believe they’re about to unlock the key to basically eliminating the cost of millions of workers indefinitely, wouldn’t throwing thousands of workers at it to get there faster lead to cost savings?

          Say I wanted a machine that could make eggs forever, but to build it I had to put 100 eggs into it. Why would I put in one egg a day for 8 months instead of buying 100 eggs today?

        • jonne@infosec.pub · 2 months ago

          Yeah, suddenly they’ll go from 60-hour work weeks to 0 if the AI proponents are to be believed (which they shouldn’t be).

          • Sundray@lemmy.sdf.org · 2 months ago

            For real – ultimately it’s the dream of every billionaire to have a servile AI at their beck and call, while the rest of us can eat rocks and roam the wasteland fighting over gasoline.

    • Ech@lemm.ee · 2 months ago

      Perhaps this is what you mean, but it’s even worse than just unpaid hours for current employees. His implicit goal is to create a slave class of people (which is what actual AI would be) that he can make more of or delete at his whim, and eliminate the livelihoods of any current employees (besides him and other execs, of course).

  • ilinamorato@lemmy.world · 2 months ago

    I’m pretty sure the science says it’s more like 20-30. I know personally, if I try to work more than about 40-ish hours in a week, the time comes out of the following week without me even trying. A task that took two hours in a 45-hour “crunch” week will end up taking three when I don’t have to crunch. And if I keep up the crunch for too long, I start making a lot of mistakes.

  • CosmoNova@lemmy.world · 2 months ago

    Is Google in the cloning business? Because I could swear that’s Zack Freedman from the YouTube 3D printing channel. He even wears the heads-up display (YouTube link). Sorry for being off-topic, but who cares what tech CEOs say about AGI anyway?

  • RegalPotoo@lemmy.world · 2 months ago

    They talk about AGI like it’s some kind of intrinsically benevolent messiah that is going to come along and free humanity of its limitations, rather than a product that is going to be monetised to make a few very rich people even richer.

      • JasonDJ@lemmy.zip · 2 months ago

        What if the whole earth, itself, is like, one giant supercomputer, designed to answer the ultimate question, and it’s just been running for billions of years?

    • SkavarSharraddas@gehirneimer.de · 2 months ago

      It’s a belief in a Techno-Jesus who will solve all our problems so we don’t have to solve them ourselves (i.e. do the uncomfortable things we don’t want to do). Just like aliens, the singularity, etc.

      • Balder@lemmy.world · 2 months ago

        Ironically, the world is full of people who like to think about solutions to problems. But those in power won’t put them to work on those problems, because it’s not part of the political game.

  • MudMan@fedia.io · 2 months ago

    You know it’s bad when I had to click all the way through to the body of the article to verify this isn’t an Onion article. Do we still have a “Not The Onion” space here?

  • Sundray@lemmy.sdf.org · 2 months ago

    What a brilliant suggestion, no way an AI could have come up with that, executive jobs are safe forever!