• ipkpjersi@lemmy.ml
    link
    fedilink
    English
    arrow-up
    0
    ·
    6 months ago

    That’s about right. I’ve been using LLMs daily to automate a lot of the cruft work in my dev job; it’s like having a knowledgeable intern who sometimes impresses you with their knowledge but needs a lot of guidance.

    • eldavi@lemmy.ml
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      6 months ago

      Watch out; I learned the hard way in an interview that I do this so much that I can no longer create Terraform configs & Ansible playbooks from scratch.

      Even a basic API call from scratch was difficult to remember, and I’m sure I looked like a hack to them, since they treated me as one.
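
      For reference, the kind of “basic API call from scratch” in question is only a few lines of Python; here’s a minimal sketch (the URL, token and response shape are placeholders, not any real service):

      ```python
      import requests  # third-party HTTP client

      API_URL = "https://api.example.com/v1/servers"  # placeholder endpoint
      TOKEN = "changeme"                              # placeholder credential

      resp = requests.get(
          API_URL,
          headers={"Authorization": f"Bearer {TOKEN}"},
          params={"status": "running"},
          timeout=10,
      )
      resp.raise_for_status()      # fail loudly on 4xx/5xx errors
      for server in resp.json():   # assumes the endpoint returns a JSON list
          print(server["name"])
      ```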

      • ipkpjersi@lemmy.ml
        link
        fedilink
        English
        arrow-up
        0
        ·
        6 months ago

        I mean, interviews have always been hell for me (often with multiple rounds of leetcode) so there’s nothing new there for me lol

        • eldavi@lemmy.ml
          link
          fedilink
          English
          arrow-up
          0
          ·
          edit-2
          6 months ago

          Same here, but this one was especially painful since it was the closest match to my experience I’ve encountered in 20-ish years. Now I know they will never give me the time of day again and, based on my experience in Silicon Valley, I may end up on their blacklist permanently.

          • ipkpjersi@lemmy.ml
            link
            fedilink
            English
            arrow-up
            0
            ·
            6 months ago

            Blacklists are heavily overrated and exaggerated, I’d say there’s no chance you’re on a blacklist. Hell, if you interview with them 3 years later, it’s entirely possible they have no clue who you are and end up hiring you - I’ve had literally that exact scenario happen.

            The only way you’d end up on a blacklist is if you accidentally step on the owner’s dog or something like that.

            • eldavi@lemmy.ml
              link
              fedilink
              English
              arrow-up
              0
              ·
              6 months ago

              Having been on the other side of the interviewing table for the last 20-ish years, and having been told more times than I want to remember that we’re not going to hire people everyone unanimously loved and we unquestionably needed, I think blacklists are common.

              In all of the cases I’ve experienced in the last decade or so, people who had FAANG and old Silicon Valley names on their resumes but couldn’t do basic things like create an Ansible playbook from scratch were either an automatic addition to that list or at least the butt of a joke that pervaded the company’s Kool-Aid-drinker culture for years afterwards, especially in recruiting.

              Yes, they’ll eventually forget, and I think the time that takes is proportional to how egregious, or how close to home, your perceived misrepresentation is to them.

              • ipkpjersi@lemmy.ml
                link
                fedilink
                English
                arrow-up
                0
                ·
                edit-2
                6 months ago

                I think I’ve probably only ever been blacklisted once in my entire career, and that’s because I looked up the reviews of a company I had applied to, found some very concerning stuff, and just ghosted them completely, never answering their calls after we had already begun playing a bit of phone tag trying to arrange an interview.

                In my defense, they took a good while to reply to my application, and they never sent any emails, just phone calls. Come on, I’m a developer; you know I don’t want to sit on the phone all day like I’m a salesperson or something. Send an email to schedule an interview like every other company instead of just spamming phone calls lol

                Agreed though, eventually they will forget; it just needs enough time, and maybe you wouldn’t even want to work there anyway.

      • orgrinrt@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        6 months ago

        In addition, there have been some studies released lately (not sure how well established they are, so take this with a grain of salt) indicating a correlation between using LLMs for dev work and increased perceived efficiency/productivity, but also a strongly linked decrease in actual efficiency/productivity.

        After some initial excitement, I’ve dialed back using them to zero, and my contributions have been on the increase. I think it just feels good to spitball, which translates to a heightened sense of excitement while working. But it’s really just faster and more convenient to do the boring stuff with snippets and templates etc., if not as exciting. We’ve been doing pair programming with humans lately, and while that’s slower and less efficient too, it seems to contribute to a rise in quality and fewer problems in code review later, while also providing the spitballing side. In a much better format, I think, too, though I guess that’s subjective.

  • Doug7070@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    6 months ago

    Mr. Torvalds is truly a generous man; giving the current AI market an analysis of 10% usefulness is probably a decimal place or two more than will end up panning out once the hype bubble pops.

  • kitnaht@lemmy.worldBanned
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    6 months ago

    Honestly, he’s wrong though.

    I know tons of full-stack developers who use AI to GREATLY speed up their workflow. I’ve used AI image generators to get something I wanted to the concept stage before paying an artist to do the work, with the revisions I wanted that I couldn’t get the AI to produce properly.

    And first and foremost, they’re great at surfacing information that has been discussed and is available but might be buried, with no SEO behind it to surface it. They are terrible at deducing things themselves, because they can’t ‘think’, or at coming up with solutions that others haven’t already - but so long as people are aware of those limitations, they’re a pretty good tool to have.

    It’s a reactionary opinion when people jump to the ‘but they’re stealing art!’ line – isn’t your brain also stealing art when it’s inspired by others’ art? Artists don’t just POOF into having the capability to be artists. They learn slowly over time, using others’ work as inspiration or as training to improve. That’s all diffusion models do - just a lot faster.

    • AreaKode@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      AI can give me a blueprint for my logic. Then I, as a developer, make the code run. Cuts my scripting time in half.

      • Wrench@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        6 months ago

        Rofl. As a developer of nearly 20 years, lol.

        I used copilot until finally getting fed up last week and turning it off. It was a net negative to my productivity.

        Sure, when you’re doing repetitive operations that are mostly copy paste and changing names, it’s pretty decent. It can save dozens of seconds, maybe even a minute or two. That’s great and a welcome assist, even if I have to correct minor things around 50% of the time.

        But when an error slips through and I end up spending 20 minutes tracking down the problem later, all that saved time vanishes.

        And then the other times where my IDE is frozen because the plugin is stuck in some loop and eating every last resource and I spend the next 20 minutes cursing and killing processes, manually looking for recent updates that hadn’t yet triggered update notifications, etc… well, now we’re in the red, AND I’m pissed off.

        So no, AI is not some huge boon to developer productivity. Maybe it’s more useful to junior developers in the short term, but I have definitely dealt with more than a few problems that seem to derive from juniors taking AI answers and not understanding the details enough to catch the problems it introduced. And if juniors frequently rely on AI without gaining deep understanding, we’re going to have worse and worse engineers as a result.

    • antonim@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      they’re great at surfacing information that has been discussed and is available but might be buried, with no SEO behind it to surface it

      This is what I’ve seen many people claim. But it is a weak compliment for AI, and more of a criticism of the current web search engines. Why is that information unavailable to search engines, but is available to LLMs? If someone has put in the work to find and feed the quality content to LLMs, why couldn’t that same effort have been invested in Google Search?

      • kitnaht@lemmy.worldBanned
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        6 months ago

        If someone has put in the work to find and feed the quality content to LLMs, why couldn’t that same effort have been invested in Google Search?

        I’d rather a world where 10 companies can compete with google search with AIs, than where they dump money into a monopoly.

        • antonim@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          0
          ·
          6 months ago

          If you don’t feel like discussing this and won’t do anything more than deliberately miss the point, you don’t have to reply to me at all.

          • kitnaht@lemmy.worldBanned
            link
            fedilink
            English
            arrow-up
            0
            ·
            edit-2
            6 months ago

            The content is not unavailable to search engines. AI LLMs simply are better at surfacing it. I don’t know what point you were trying to make that I missed, it wasn’t on purpose, I assure you.

            • antonim@lemmy.dbzer0.com
              link
              fedilink
              English
              arrow-up
              0
              ·
              6 months ago

              AI LLMs simply are better at surfacing it

              Ok, but how exactly? Is there some magical emergent property of LLMs that guides them to filter out the garbage from the quality content?

              • kitnaht@lemmy.worldBanned
                link
                fedilink
                English
                arrow-up
                0
                ·
                6 months ago

                Yeah. Money. Google has an incentive to make search results less accurate to get you to click around and interact with more ads. As it currently stands, AI models aren’t inserting advertisements, though I suspect that’s only a matter of time.

                • antonim@lemmy.dbzer0.com
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  6 months ago

                  And that’s more or less what I was aiming for, so we’re back at square one. What you wrote is in line with my first comment:

                  it is a weak compliment for AI, and more of a criticism of the current web search engines

                  The point is that there isn’t something that makes AI inherently superior to ordinary search engines. (Personally I haven’t found AI to be superior at all, but that’s a different topic.) The difference in quality is mainly a consequence of some corporate fuckery to wring out more money from the investors and/or advertisers and/or users at the given moment. AI is good (according to you) just because search engines suck.

    • skillissuer@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      ah yes, it’s reactionary to (checks notes) not support the righteous biggest bubble since the dotcom era

      you okay out there bud?

      • kitnaht@lemmy.worldBanned
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        6 months ago

        You might want to look up the definition of reactionary. Because that’s…exactly what it means. To oppose reform/advancements.

        You okay there bud?

        In political science, a reactionary or a reactionist is a person who holds political views that favor a return to the status quo ante—the previous political state of society—which the person believes possessed positive characteristics that are absent from contemporary society.

        Congratulations – Currently you and 18 others are not smarter than an average high schooler.

          • kitnaht@lemmy.worldBanned
            link
            fedilink
            English
            arrow-up
            0
            ·
            edit-2
            6 months ago

            You’ve got a pretty high bar of proof for proving “actual fraud”…

            You can’t provably say that this is a “bubble” as claimed. The tools do what they purport to do. Where’s the fraud?

            • conciselyverbose@sh.itjust.works
              link
              fedilink
              English
              arrow-up
              0
              ·
              edit-2
              6 months ago

              It’s not remotely within the realm of plausibility that Sam Altman genuinely believes any of the horseshit he spews. (And that’s ignoring that they gained their funding by lying about the core intent of their organization by pretending to be serving the public interest and not profiteering.)

              • kitnaht@lemmy.worldBanned
                link
                fedilink
                English
                arrow-up
                0
                ·
                edit-2
                6 months ago

                It’s not remotely within the realm of plausibility that Sam Altman genuinely believes any of the horseshit he spews.

                Welcome to earth. That’s basically every business ever, and you’ll quite literally never be able to prove that in court, which is the litmus test for this claim.

    • DacoTaco@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      6 months ago

      He isn’t wrong. This comes from somebody who technically uses AI daily to help develop (GitHub Copilot in Visual Studio, to assist with code prediction based on the code base of the solution), but AI is marketed even worse than blockchain was back in 2017. It’s everywhere, in every product, even if it doesn’t have AI or has nothing to do with it. Monitor AI shit? Mouse with AI? Hell, I’ve seen a sketch of a fucking toaster with ‘AI’.
      There is shit like Microsoft Recall, Apple Intelligence, Bing Copilot, Office Copilot, …
      All of those are just… nothing special or useful. There are also chatbots which bring nothing new to the table either.
      Everyone and everything wants to market their stuff with AI and it’s disgusting.
      Does that mean that current AI tech can’t bring anything to the table? No, it totally can, but 90% of the AI stuff out there is, just like Linus says, marketing bullshit.

    • brucethemoose@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      6 months ago

      Speaking as someone who worked on AI, and is a fervent (local) AI enthusiast… it’s 90% marketing and hype, at least.

      These things are tools: they spit out tons of garbage, they basically can’t be used for anything where a confidently wrong output is unacceptable, and the way they’re trained is still morally dubious at best. And the corporate API business model of “stifle innovation so we can hold our monopoly, then squeeze users” is hellish.

      As you pointed out, generative AI is a fantastic tool, but it is a TOOL that needs some massive changes and improvements, and it’s wrapped up in hype that gives it a bad name… I drank some of the Kool-Aid too when Llama 1 came out, but you have to look at the market and see how much FUD and nonsense is flying around.

      • Riskable@programming.dev
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        6 months ago

        As another (local) AI enthusiast I think the point where AI goes from “great” to “just hype” is when it’s expected to generate the correct response, image, etc on the first try.

        For example, telling an AI to generate a dozen images from a prompt then picking a good one or re-working the prompt a few times to get what you want. That works fantastically well 90% of the time (assuming you’re generating something it has been trained on).
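
        For anyone curious what that workflow looks like in code, here’s a minimal sketch using the Hugging Face diffusers library with a Stable Diffusion checkpoint (the model name and settings are just examples): generate a batch, then cherry-pick by hand.

        ```python
        import torch
        from diffusers import StableDiffusionPipeline

        # Load a Stable Diffusion checkpoint (example model; any local checkpoint works).
        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")

        prompt = "a watercolor painting of a lighthouse at dusk"

        # Generate a dozen candidates and pick the good ones (or rework the prompt) afterwards.
        images = pipe(prompt, num_images_per_prompt=12, num_inference_steps=30).images
        for i, img in enumerate(images):
            img.save(f"candidate_{i:02d}.png")
        ```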

        Expecting AI to respond with the correct answer when given a query > 50% of the time or expecting it not to get it dangerously wrong? Hype. 100% hype.

        It’ll be a number of years before AI is trustworthy enough not to hallucinate bullshit or generate the exact image you want on the first try.

    • li10@feddit.uk
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      6 months ago

      How’s he wrong?

      Did you actually listen to what he said or are you just reading the headline and making it fit another narrative to respond to?

      Because he also said he thinks it’s going to change the world, he just hates the marketing BS that’s overhyping it.

      Probably because, as anyone who’s actually used AI knows, it has some core weaknesses. But the marketers are happy to gloss over that and just claim it will be able to do nearly anything.

      He said it’s interesting, but to give it five years to see how it’s actually useful, which is probably the most sane take you can have about AI imo.

      • Telorand@reddthat.com
        link
        fedilink
        English
        arrow-up
        0
        ·
        6 months ago

        It will be interesting when the bubble pops, because that’s probably when we’ll see the useful things it is actually good at.

        • snooggums@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          6 months ago

          Which is how new technologies tend to go: see what sticks after exploring what is possible. So it shouldn’t be surprising that AI is going through the motions, but it is getting annoying how fast it is ruining functioning systems by being jammed in with no guardrails.

        • athairmor@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          6 months ago

          But it also means we get Sam Altman as the next Elon Musk if he cashes in before the pop, and the same goes for whatever other tech bros manage it. More filthy-rich men with the emotional maturity of a 12-year-old.

        • Semperverus@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          6 months ago

          Summarizing documents, writing documents you don’t want to (within reason), and… whatever the hell Neuro-sama is doing on Vedal’s channel, are like the only ones I’ve found so far that kind of work. And I guess image generation.

          • GetOffMyLan@programming.dev
            link
            fedilink
            English
            arrow-up
            0
            ·
            6 months ago

            It’s amazingly good at moderating user content to flag for moderator review. Existing text analysis completely falls down beyond keyword filtering tbh.

            It’s really good at sentiment analysis, which is great for things like user reviews. The Amazon AI notes on products are actually brilliant at summarizing the pros and cons of a product. I work for a holiday-let company, and we experimented with using it to find customers we need to follow up with; the results were amazing.
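
            As a rough illustration of the sentiment-analysis side (not the commenter’s actual setup), an off-the-shelf classifier from the transformers library is only a few lines; you could run it over reviews and flag the negative ones for follow-up:

            ```python
            from transformers import pipeline

            # Small off-the-shelf sentiment model; weights are downloaded on first run.
            classifier = pipeline("sentiment-analysis")

            reviews = [
                "The cottage was spotless and the host was lovely.",
                "The heating was broken all week and nobody answered the phone.",
            ]

            for review, result in zip(reviews, classifier(reviews)):
                # result looks like {"label": "NEGATIVE", "score": 0.98}
                if result["label"] == "NEGATIVE":
                    print("Follow up:", review, round(result["score"], 2))
            ```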

            It smashes other automated translating services as well.

            I use it a lot as a programmer to very quickly learn new topics, and as interactive docs that I can ask follow-up questions of. I can pick up a new language as I go much faster than with traditional resources.

            It’s honestly a complete game changer.

            • sugar_in_your_tea@sh.itjust.works
              link
              fedilink
              English
              arrow-up
              0
              ·
              6 months ago

              It’s honestly a complete game changer.

              It is, both in good and bad ways. The problem, as Linus and others here are pointing out, is that marketing pushes the good and downplays/ignores the bad, so there’s going to be a rough adjustment period as people eventually see through the BS and find the issues, and the longer that takes, the harder things will crash.

              There are plenty of good uses of modern AI approaches, they’re just far fewer than the ones being marketed these days.

            • Telorand@reddthat.com
              link
              fedilink
              English
              arrow-up
              0
              ·
              6 months ago

              The one place where I sincerely hope it takes root and succeeds is in medicine. Having better drugs, helping to identify potential problems or diseases, identifying health patterns (all with human review and proper trials, naturally)…

              It’s not even close to the magical AGI that tech bros are promising, but it is good at digesting data, and science and medicine are full of that. Plus, given how overworked doctors and nurses can be, having a preliminary analysis from a computer that doesn’t get tired or overworked seems like it would probably help with accurate diagnosis.

  • brucethemoose@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    6 months ago

    As a fervent AI enthusiast, I disagree.

    …I’d say it’s 97% hype and marketing.

    It’s crazy how much FUD is flying around, and it legitimately buries good open research. It’s also crazy what these giant corporations are explicitly saying they’re going to do, and that anyone buys it. TSMC’s allegedly calling Sam Altman a ‘podcast bro’ is spot on, and I’d add “manipulative vampire” to that.

    Talk to any long-time resident of localllama and similar “local” AI communities who actually digs into this stuff, and you’ll find immense skepticism, not the crypto-style AI bros you find on LinkedIn, Twitter and such, who blot everything else out.

    • billwashere@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      Yep, the current iteration is. But should we cross the threshold to full AGI… that’s either gonna be awesome or world-ending. Not sure which.

      • brucethemoose@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        6 months ago

        Current LLMs cannot be AGI, no matter how big they are. The fundamental architecture just isn’t right.

        • billwashere@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          6 months ago

          You’re absolutely right. LLMs are good at faking language and sometimes not even great at that. Not sure why I got downvoted but oh well. But AGI will be game changing if it happens.

      • Naz@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        0
        ·
        6 months ago

        Based on what I’ve witnessed so far, people will play with their AGI units for a bit and then put them down to continue scrolling memes.

        Which means it is neither awesome, nor world-ending, but just boring/business as usual.

      • Damage@feddit.it
        link
        fedilink
        English
        arrow-up
        0
        ·
        6 months ago

        I know nothing about anything, but I unfoundedly believe we’re still very far away from the computing power required for that. I think we still underestimate the power of biological brains.

        • billwashere@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          6 months ago

          Very likely. But 4 years ago I would have said we weren’t close to what these LLMs can do now so who knows.

      • brucethemoose@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        6 months ago

        It’s selling an anticompetitive dystopia. It’s selling a Facebook monopoly vs the Fediverse.

        We don’t need 7 trillion dollars of datacenters burning the Earth; we need collaborative, open-source innovation.

    • just_an_average_joe@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      The saddest part is, this is going to cause yet another AI winter. The first few were caused by genuine over-enthusiasm, but this one is purely fuelled by greed.

      • sploosh@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        6 months ago

        The AI ecosystem is flooded; we need a good bubble pop to slow down the massive waste of resources that our current info-remix-based-on-what-you-will-likely-react-positively-to, shit-tier AI represents.

    • conciselyverbose@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      Seriously, I’d love to be enthusiastic about it because it’s genuinely cool what you can do with math.

      But the lies that are shoved in our faces are just so fucking much and so fucking egregious that it’s pretty much impossible.

      And on top of that, LLMs are hugely overshadowing actually interesting approaches when it comes to funding.

    • paddirn@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      I really want to like AI, I’d love to have an intelligent AI assistant or something, but I just struggle to find any uses for it outside of some really niche cases or basic brainstorming tasks. Otherwise, it just feels like a lot of work for very little benefit, or results that I can’t even trust or use.

      • brucethemoose@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        6 months ago

        It’s useful.

        I keep Qwen 32B loaded on my desktop pretty much whenever it’s on, as an (unreliable) assistant to analyze or parse big texts, to do quick chores or write scripts, to bounce ideas off of, or even as an offline replacement for Google Translate (though I specifically use Aya 32B for that).

        It does “feel” different when the LLM is local, as you can manipulate the prompt syntax so easily, hammer it with multiple requests that come back really fast when it seems to get something wrong, not worry about refusals or data leakage and such.
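
        For reference, the “hammer it with multiple requests” part is trivial once the model sits behind a local OpenAI-compatible endpoint (llama.cpp’s server and most local runners expose one); a minimal sketch, with the URL and model name as placeholders:

        ```python
        from openai import OpenAI

        # Point the standard OpenAI client at a local server (URL and model are placeholders).
        client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

        def ask(question: str, temperature: float = 0.3) -> str:
            resp = client.chat.completions.create(
                model="qwen2.5-32b-instruct",  # whatever model the local server has loaded
                messages=[{"role": "user", "content": question}],
                temperature=temperature,
            )
            return resp.choices[0].message.content

        # Cheap to retry, rephrase, or tweak the prompt when the first answer looks off.
        print(ask("Summarize the main arguments in the following text: ..."))
        ```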

      • dan@upvote.au
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        6 months ago

        I receive alerts when people are outside my house, using security cameras with Blue Iris, CodeProject AI, Node-RED and Home Assistant, plus a Google Coral for local AI. Entirely local - no cloud services apart from Google’s notification system to get notifications to my phone while I’m not home (which most Android apps use). That’s a good use case for AI since it avoids the false positives that occur with regular motion detection.
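
        The general shape of that pipeline, boiled down to a toy sketch (the detection endpoint, ports and notify service here are placeholders, not the exact Blue Iris / CodeProject AI / Home Assistant wiring): only alert when a detector actually sees a person.

        ```python
        import requests

        DETECT_URL = "http://localhost:32168/v1/vision/detection"  # placeholder local object-detection endpoint
        NOTIFY_URL = "http://homeassistant.local:8123/api/services/notify/mobile_app_phone"  # placeholder service
        HA_TOKEN = "long-lived-access-token"  # placeholder

        def person_in(snapshot_path: str) -> bool:
            with open(snapshot_path, "rb") as f:
                result = requests.post(DETECT_URL, files={"image": f}, timeout=10).json()
            # Alerting only on 'person' detections is what removes the motion-detection false positives.
            return any(p.get("label") == "person" and p.get("confidence", 0) > 0.6
                       for p in result.get("predictions", []))

        if person_in("driveway.jpg"):
            requests.post(NOTIFY_URL, json={"message": "Person detected outside"},
                          headers={"Authorization": f"Bearer {HA_TOKEN}"}, timeout=10)
        ```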

        • WalnutLum@lemmy.ml
          link
          fedilink
          English
          arrow-up
          0
          ·
          6 months ago

          I’ve been curious about the Google Coral, but their memory is so tiny I’m not sure what kinds of models you can run on them.

    • Damage@feddit.it
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      TSMC’s allegedly calling Sam Altman a ‘podcast bro’ is spot on, and I’d add “manipulative vampire” to that.

      What’s the source for that? It sounds hilarious

      • brucethemoose@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        6 months ago

        https://web.archive.org/web/20240930204245/https://www.nytimes.com/2024/09/25/business/openai-plan-electricity.html

        When Mr. Altman visited TSMC’s headquarters in Taiwan shortly after he started his fund-raising effort, he told its executives that it would take $7 trillion and many years to build 36 semiconductor plants and additional data centers to fulfill his vision, two people briefed on the conversation said. It was his first visit to one of the multibillion-dollar plants.

        TSMC’s executives found the idea so absurd that they took to calling Mr. Altman a “podcasting bro,” one of these people said. Adding just a few more chip-making plants, much less 36, was incredibly risky because of the money involved.

    • KSP Atlas@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      After getting my head around the basics of the way LLMs work, I thought “people rely on this for information?”; the model seems ok for tasks like summarisation though

      • brbposting@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        0
        ·
        6 months ago

        I don’t love it for summarization. If I read a summary, my takeaway may be inaccurate.

        Brainstorming is incredible. And revision suggestions. And drafting tedious responses, reformatting, parsing.

        In all cases, nothing gets attributed to me unless I read every word and am in a position to verify the output. And I internalize nothing directly, besides philosophy or something. Sure can be an amazing starting point especially compared to a blank page.

      • brucethemoose@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        6 months ago

        the model seems ok for tasks like summarisation though

        That, and retrieval, and the business use cases so far - but even then, only where it’s acceptable for the results to be wrong somewhat frequently.

      • dan@upvote.au
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        6 months ago

        It’s good for coding if you train it on your own code base. Not great for writing very complex code since the models tend to hallucinate, but it’s great for common patterns, and straightforward questions specific to your code base that can be answered based on existing code (eg “how do I load a user’s most recent order given their email address?”)
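
        To make that concrete, the kind of answer you’d hope to get back for a question like that is just glue code over your own models; a hypothetical sketch (module and model names are made up) using SQLAlchemy:

        ```python
        from sqlalchemy import select
        from myapp.db import session            # hypothetical session factory
        from myapp.models import Order, User    # hypothetical ORM models

        def most_recent_order_for(email: str) -> Order | None:
            """Load the most recent order for the user with the given email address."""
            stmt = (
                select(Order)
                .join(User, Order.user_id == User.id)
                .where(User.email == email)
                .order_by(Order.created_at.desc())
                .limit(1)
            )
            return session.execute(stmt).scalars().first()
        ```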

        • brbposting@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          0
          ·
          6 months ago

          It’s wild when you only know how to use SELECT in SQL, but after a dollar’s worth of prompting and 10 minutes of your time, you can have a significantly complex query you end up using multiple times a week.

    • Blackmist@feddit.uk
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      TSMC are probably making more money than anyone in this goldrush by selling the shovels and picks, so if that’s their opinion, I feel people should listen…

      There’s little in the AI business plan other than hurling money at it and hoping job losses ensue.

      • brucethemoose@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        6 months ago

        TSMC doesn’t really have official opinions, they take silicon orders for money and shrug happily. Being neutral is good for business.

        Altman’s scheme is just a whole other level of crazy though.

    • tacosanonymous@lemm.ee
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      Agreed, that’s why it’s so dangerous. These tech bros are going to do damage with their shitty products. It seems like it’s Altman’s goal, honestly.

    • Valmond@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      6 months ago

      Ya, it’s like machine learning but better. That’s about it IMO.

      Edit: As I have to spell it out: as opposed to (machine learning with) neural networks.

        • sugar_in_your_tea@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          0
          ·
          6 months ago

          It’s also neural networks, and probably some other CS structures.

          AI is a category, and even specific implementations tend to use multiple techniques.

          • brucethemoose@lemmy.world
            link
            fedilink
            English
            arrow-up
            0
            ·
            6 months ago

            Well there is a very specific architecture “rut” the LLMs people use have fallen into, and even small attempts to break out (like with Jamba) don’t seem to get much interest, unfortunately.

            • sugar_in_your_tea@sh.itjust.works
              link
              fedilink
              English
              arrow-up
              0
              ·
              6 months ago

              Sure, but LLMs aren’t the only AI being used, nor will they eliminate the other forms of AI. As people see issues with the big LLMs, development focus will change to adopt other approaches.

              • commandar@lemmy.world
                link
                fedilink
                English
                arrow-up
                0
                ·
                edit-2
                6 months ago

                There is real risk that the hype cycle around LLMs will smother other research in the cradle when the bubble pops.

                The hyperscalers are dumping tens of billions of dollars into infrastructure investment every single quarter right now on the promise of LLMs. If LLMs don’t turn into something with a tangible ROI, the term AI will become every bit as radioactive to investors in the future as it is lucrative right now.

                Viable paths of research will become much harder to fund if investors get burned because the business model they’re funding right now doesn’t solidify beyond “trust us bro.”

                • sugar_in_your_tea@sh.itjust.works
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  6 months ago

                  Sure, but those are largely the big tech companies you’re talking about, and research tends to come from universities and private orgs. That funding hasn’t stopped, it just doesn’t get the headlines like massive investments into LLMs currently do. The market goes in cycles, and once it finds something new and promising, it’ll dump money into it until the next hot thing comes along.

                  There will be massive market consequences if AI fails to deliver on its promises (and I think it will, because the promises are ridiculous), and we get those every so often. If we look back about 25 years, we saw the same thing w/ the dotcom craze, where anything with a website got obscene amounts of funding, even if they didn’t have a viable business model, and we had a massive crash. But important websites survived that bubble bursting, and the market recovered pretty quickly and within a decade we had yet another massive market correction due to another bubble (the housing market, mostly due to corruption in the financial sector).

                  That’s how the market goes. I think AI will crash, and I think it’ll likely crash in the next 5 years or so, but the underlying technologies will absolutely be a core part of our day-to-day life in the same way the Internet is after the dotcom burst. It’ll also look quite a bit different IMO than what we’re seeing today, and within 10 years of that crash, we’ll likely be beyond where we were just before the crash, at least in terms of overall market capitalization.

                  It’s a messy cycle, but it seems to work pretty well in aggregate.

                • brucethemoose@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  edit-2
                  6 months ago

                  the term AI will become every bit as radioactive to investors in the future as it is lucrative right now.

                  Well you say that, but somehow crypto is still around despite most schemes being (IMO) a much more explicit scam. We have politicians supporting it.

    • WoodScientist@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      I think we should indict Sam Altman on two sets of charges:

      1. A set of securities fraud charges.

      2. 8 billion counts of criminal reckless endangerment.

      He’s out on podcasts constantly saying that OpenAI is near superintelligent AGI, that there’s a good chance they won’t be able to control it, and that human survival is at risk. How is gambling with human extinction not a massive act of planetary-scale criminal reckless endangerment?

      So either he is putting the entire planet at risk, or he is lying through his teeth about how far along OpenAI is. If he’s telling the truth, he’s endangering us all. If he’s lying, then he’s committing securities fraud in an attempt to defraud shareholders. Either way, he should be in prison. I say we indict him for both simultaneously and let the courts sort it out.

    • falkerie71@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      For real. Being a software engineer with basic knowledge in ML, I’m just sick of companies from every industry being so desperate to cling onto the hype train they’re willing to label anything with AI, even if it has little or nothing to do with it, just to boost their stock value. I would be so uncomfortable being an employee having to do this.

      • Badland9085@lemm.ee
        link
        fedilink
        English
        arrow-up
        0
        ·
        6 months ago

        As someone who was working really hard trying to get my company to be able to use some classical ML (with very limited amounts of data), who has some knowledge of how AI works, and who just generally wants to do some cool math stuff at work, being asked incessantly to shove AI into any problem that our execs think is a “good sell” and being pressured to think about how we can “use AI” was a terrible feeling. They now think my work is insufficient and have been tightening the noose on my team.

      • Mikelius@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        6 months ago

        For sure, it seems like 90% of AI startups are nothing more than front-end wrappers for a GPT instance.

        • dan@upvote.au
          link
          fedilink
          English
          arrow-up
          0
          ·
          edit-2
          6 months ago

          They’re all built on top of OpenAI which is very unprofitable at the moment. Feels like the whole industry is built on a shaky foundation.

          Putting the entire fate of your company in a different company (OpenAI) is not a great business move. I guess the successful AI startups will eventually transition to self-hosted models like Llama, if they survive that long.

          • Zos_Kia@lemmynsfw.com
            link
            fedilink
            English
            arrow-up
            0
            ·
            6 months ago

            Most projects I’ve been in contact with are very aware of that fact. That’s why telemetry is so big right now. Everybody is building datasets in the hopes of fine tuning smaller, cheaper models once they have enough good quality data.
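
            The mechanics of that are usually mundane: log every prompt/response pair (plus whatever feedback signal you have) to an append-only file and curate it later for fine-tuning. A toy sketch of that kind of logger:

            ```python
            import json
            import time

            def log_example(path: str, prompt: str, completion: str, rating: int | None = None) -> None:
                """Append one prompt/completion pair (with optional user feedback) as a JSONL record."""
                record = {
                    "ts": time.time(),
                    "prompt": prompt,
                    "completion": completion,
                    "rating": rating,  # e.g. thumbs up/down from the UI, if collected
                }
                with open(path, "a", encoding="utf-8") as f:
                    f.write(json.dumps(record, ensure_ascii=False) + "\n")

            # Later: filter for well-rated examples and convert them to your fine-tuning format.
            log_example("telemetry.jsonl", "Summarize this ticket: ...", "Customer reports ...", rating=1)
            ```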

            • xavier666@lemm.ee
              link
              fedilink
              English
              arrow-up
              0
              ·
              6 months ago

              My company is realizing that hosting a model which is private, cost-effective, and performs better than traditional algorithms is like finding a unicorn. A few months back, the top execs were jumping around GenAI like a bunch of kids. Fortunately, the senior research head beat some sense into them.

              • Zos_Kia@lemmynsfw.com
                link
                fedilink
                English
                arrow-up
                0
                ·
                6 months ago

                What kind of use cases were they, where you didn’t find suitable local models to work with? I’ve found that general “chatbot” things are hit and miss, but more domain-constrained tasks (such as extracting structured entities from unstructured text) are pretty reliable even on smaller models. I’m not counting my chickens yet as my dataset is still somewhat small, but preliminary testing has been very promising in that regard.
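
                For what it’s worth, the domain-constrained extraction case usually boils down to asking the (local) model for strict JSON and refusing anything that doesn’t parse; a rough sketch against an OpenAI-compatible local server (URL and model name are placeholders):

                ```python
                import json
                from openai import OpenAI

                client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")  # placeholder local server

                def extract_entities(text: str) -> dict | None:
                    prompt = (
                        "Extract the entities from the text below and reply with JSON only, "
                        'shaped as {"vendor": str, "date": str, "amount": float}.\n\n'
                        "Text: " + text
                    )
                    resp = client.chat.completions.create(
                        model="local-small-model",  # placeholder for whatever small model is loaded
                        messages=[{"role": "user", "content": prompt}],
                        temperature=0,
                    )
                    try:
                        return json.loads(resp.choices[0].message.content)  # reject anything that isn't valid JSON
                    except json.JSONDecodeError:
                        return None

                print(extract_entities("Invoice from Acme Ltd dated 2024-03-02 for 1250.00 EUR"))
                ```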

                • xavier666@lemm.ee
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  6 months ago

                  What kind of use cases were they, where you didn’t find suitable local models to work with?

                  Any time you ask very domain-specific questions; e.g. “I have collected some soil samples from the Mesolithic age near the Amazon basin which have high sulfur and phosphorus content compared to my other samples. What factors could contribute to this distribution?”, both off-the-shelf local models & OpenAI fail.

                  The main reason is that these models are not trained on highly specialized domains of text. Sometimes the models start hallucinating, which reduces our trust in them.

              • falkerie71@sh.itjust.works
                link
                fedilink
                English
                arrow-up
                0
                ·
                6 months ago

                You’re lucky there’s a higher-up who could talk down the even-higher-ups. Though sometimes it’s not even about the R&D teams.

                I saw company-wide HR educational emails or courses telling you how to improve your work quality/efficiency, and one of them told us to “research AI” and learn how to utilize it, talking about how great it is and how it improved work efficiency by 30%. Sure, it has its uses, but I won’t go touting how great it is. And with how ChatGPT works, you have to be the biggest idiot in the world to upload all your sensitive stuff to ChatGPT just for it to make a spreadsheet faster. But without those disclaimers in the email, I doubt regular clerical staff know about this, and it’s extremely dangerous.

  • narc0tic_bird@lemm.ee
    link
    fedilink
    English
    arrow-up
    0
    ·
    6 months ago

    Sounds about right. There are some valid and good use cases for “AI”, but the majority is just buzzword marketing.

  • Grandwolf319@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    0
    ·
    6 months ago

    And then people will complain about that saying it’s almost all hype and no substance.

    Then that one tech bro will keep insisting that lemmy is being unfair to AI and there are so many good use cases.

    No one is denying the 10% of use cases; we just don’t think they’re special or need extra attention, since those use cases already had other possible algorithmic solutions.

    Tech bros need to realize that even if there are some use cases for AI, there has not been any revolution. Stop trying to make it happen and enjoy your new, slightly better tool in silence.

    • cybersandwich@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      Hi! It’s me, the guy you discussed this with the other day! The guy that said Lemmy is full of AI wet blankets.

      I am 100% with Linus AND would say the 10% good use cases can be transformative.

      Since there isn’t any room for nuance on the Internet, my comment seemed to ruffle feathers. There are definitely some folks out there that act like ALL AI is worthless and LLMs specifically have no value. I provided a list of use cases that I use pretty frequently where it can add value. (Then folks started picking it apart with strawmen).

      I gotta say though, this wave of AI tech feels different. It reminds me of the early days of the web/computing in the late 90s and early 2000s, where it’s fun, exciting, and people are doing all sorts of weird, quirky shit with it, and it’s not even close to perfect. It breaks a lot and has limitations, but there is something there. There is a lot of promise.

      Like I said elsewhere, it ain’t replacing humans any time soon, we won’t have AGI for decades, and it’s not solving world hunger. That’s all hype-bro bullshit. But there is actual value here.

      • Grandwolf319@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        0
        ·
        6 months ago

        Hi! It’s me, the guy you discussed this with the other day! The guy that said Lemmy is full of AI wet blankets.

        Omg you found me in another post. I’m not even mad; I do like how passionate you are about things.

        Since there isn’t any room for nuance on the Internet, my comment seemed to ruffle feathers. There are definitely some folks out there that act like ALL AI is worthless and LLMs specifically have no value. I provided a list of use cases that I use pretty frequently where it can add value. (Then folks started picking it apart with strawmen).

        What you’re talking about is polarization and yeah, it’s a big issue.

        This is a good example: I never used any strawman, nor did I disagree with the fact that it can be useful in some shape or form. I was trying to say its value is much, much lower than what people claim it to be.

        But that’s the issue with polarization: me saying there is much less value can be interpreted as absolute zero, and I apologize for contributing to the polarization.

  • pHr34kY@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    6 months ago

    I’m waiting for the part where it gets used for things that are not lazy, manipulative and dishonest. Until then, I’m sitting it out like Linus.

    • Z3k3@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      This is where I’m at. The push right now has NFT pump-and-dump energy.

      The moment someone says AI to me right now, I auto-disengage. When the dust settles, I’ll look at it seriously.

    • SkyeStarfall@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      AI has been used for these things for decades; it’s just in the background and not noticed by laypeople.

      Though the biggest issue is that when people say “AI” today, they mean specifically LLMs, but the world of AI is so much larger than that.

    • Kusimulkku@lemm.ee
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      I’m waiting for the part where it gets used for things that are not lazy

      Replacing menial or boring tasks is like 90% of what I’m hoping from it.

  • Zip2@feddit.uk
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    6 months ago

    Oh please. Wait until they release double-sided, double-density 128bit AI quantum blockchain that runs on premises/in the cloud edge hybrid.

  • Noxy@yiffit.net
    link
    fedilink
    English
    arrow-up
    0
    ·
    6 months ago

    Game devs are gonna have to use different language to describe what used to be simply called “enemy AI”, where exactly zero machine learning is involved.

  • NABDad@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    6 months ago

    I had a professor in college that said when an AI problem is solved, it is no longer AI.

    Computers do all sorts of things today that 30 years ago were the stuff of science fiction. Back then many of those things were considered to be in the realm of AI. Now they’re just tools we use without thinking about them.

    I’m sitting here using gesture typing on my phone to enter these words. The computer is analyzing my motions and predicting what words I want to type based on the statistical likelihood of what comes next, chosen from the group of possible words that my gesture could represent. This would have been the realm of AI once, but now it’s just the keyboard app on my phone.
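
    A toy sketch of that idea (the scoring functions here are made up, just to show the shape of it): the keyboard scores each candidate word by how well it fits the swipe path and by how likely it is to follow the words already typed, then picks the best blend.

    ```python
    # Toy illustration only: real gesture keyboards use trained models for both scores.
    def rank_candidates(candidates, path_score, next_word_prob, alpha=0.7):
        """Rank words by a blend of gesture-path fit and language-model likelihood."""
        return sorted(
            candidates,
            key=lambda w: alpha * path_score(w) + (1 - alpha) * next_word_prob(w),
            reverse=True,
        )

    # Hand-made scores for a swipe that could plausibly mean several words.
    print(rank_candidates(
        ["word", "worse", "ward"],
        path_score={"word": 0.9, "worse": 0.8, "ward": 0.6}.get,
        next_word_prob={"word": 0.30, "worse": 0.05, "ward": 0.01}.get,
    ))
    ```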

    • designatedhacker@lemm.ee
      link
      fedilink
      English
      arrow-up
      0
      ·
      6 months ago

      LLMs without some sort of symbolic reasoning layer aren’t actually able to hold a model of what their context is and how its parts relate. They predict the next token, but fall apart when you change the numbers in a problem or add some negation to the prompt.

      Awesome for protein research, summarization, speech recognition, speech generation, deep fakes, spam creation, RAG document summary, brainstorming, content classification, etc. I don’t even think we’ve found all the patterns they’d be great at predicting.

      There are tons of great uses, but just throwing more data, memory, compute, and power at transformers is likely to hit a wall without new models. All the AGI hype is a bit overblown. That’s not from me; that’s Noam Chomsky: https://youtu.be/axuGfh4UR9Q?t=9271.

      • NABDad@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        6 months ago

        I’ve often thought LLMs could replace all of the C-suites and upper and middle management.

        Funny how no companies push that as a possibility.

        • Zink@programming.dev
          link
          fedilink
          English
          arrow-up
          0
          ·
          6 months ago

          I almost expect that we’ll see some company reveal it has been letting an AI control the top level decision making for the business itself, including if and when to reveal the AI.

          But the funny thing will be that all the executives and board members still have jobs and huge stock awards. They will all pat each other on the back for getting paid more money to do less work, by being bold and taking a risk to let the computer do half their job for them.

  • Tux@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    6 months ago

    Yeah, he’s right. AI is mostly used by corps to enshittify their products for just a bit of extra profit.