US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.

In a survey comparing the views of a nationally representative sample of the general public (5,410 respondents) with a sample of 1,013 AI experts, the Pew Research Center found that “experts are far more positive and enthusiastic about AI than the public” and “far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years” (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally, while only 15 percent expect to be harmed.

The public does not share this confidence. Only about 11 percent of the public says that “they are more excited than concerned about the increased use of AI in daily life.” They’re much more likely (51 percent) to say they’re more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, whereas nearly half the public anticipates they will be personally harmed by AI.

  • Suite404@lemmy.world

    It should. We should have radically different lives today because of technology. But greed keeps us in the shit.

  • sheetzoos@lemmy.world

    New technologies are not the issue. The problem is billionaires will fuck it up because they can’t control their insatiable fucking greed.

    • ☂️-@lemmy.ml

      exactly. we could very well work fewer hours with the same pay. we wouldn’t be as depressed and angry as we are right now.

      we just have to overthrow, what, like 2000 people in a given country?

  • Clent@lemmy.dbzer0.com

    I do as a software engineer. The fad will collapse. Software engineering hiring will increase, but the pipeline of new engineers is dry because no one wants to enter the career with companies hanging AI over everyone’s heads. Basic supply and demand says my skillset will become more valuable.

    Someone will need to clean up the AI slop. I’ve already had similar positions where I was brought in to clean up code bases that failed after being outsourced.

    AI is simply the next iteration. The problem is always the same: the business doesn’t know what it really wants and needs, and has no ability to assess what has been delivered.

    • mctoasterson@reddthat.com

      AI can look at a bajillion examples of code and spit out its own derivative impersonation of that code.

      AI isn’t good at doing a lot of other things software engineers actually do. It isn’t very good at attending meetings, gathering requirements, managing projects, writing documentation for highly-industry-specific products and features that have never existed before, working user tickets, etc.

      • futatorius@lemm.ee

        I work in an environment where we’re dealing with high volumes of data, but not like a few meg each for millions of users. More like a few hundred TB fed into multiple pipelines for different kinds of analysis and reduction.

        There’s a shit-ton of prior art for how to scale up relatively simple web apps to support mass adoption. But there’s next to nothing about how to do what we do, because hardly anyone does. So look ma, no training set!

    • lobut@lemmy.ca

      A completely random story, but: I’m on the AI team at my company. However, I do infrastructure/application work rather than the AI stuff. First off, I had to convince my company to move our data scientist to this team. They had him doing DevOps work (complete mismanagement of resources). Also, the work I was doing with AI was SO unsatisfying. We weren’t tweaking any models. We were just shoving shit to ChatGPT. Now, it would be interesting if you’re doing RAG stuff or other things. However, I was “crafting” my prompt and I could not give a shit less about writing a perfect prompt. I’m typically used to coding what I want, but here I had to find out how to write it properly: “please don’t format it like X”. Like, I wasn’t using AI to write code; it was a service endpoint.

      During lunch with the AI team, they keep saying things like “we only have 10 years left at most”. I was like, “but if you have AI spit out this code, if something goes wrong … don’t you need us to look into it?” they were like, “yeah but what if it can tell you exactly what the code is doing”. I’m like, “but who’s going to understand what it’s saying …?” “no, it can explain the type of problem to anyone”.

      I said, I feel like I’m talking to a libertarian right now. Every response seems to be some solution that doesn’t exist.
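      The “shoving shit to ChatGPT” setup described above boils down to string-building around a hosted endpoint. A minimal sketch, where the function names, the instruction wording, and the example inputs are all hypothetical placeholders rather than any real API:

```python
# Minimal sketch of "prompt crafting as the whole job" (hypothetical names,
# not a real API). The only real logic lives in the instruction string.

def build_prompt(task: str, text: str) -> str:
    """Wrap the input in a carefully worded instruction."""
    return (
        f"{task}\n"
        "Please don't format the answer as a bulleted list.\n\n"
        f"{text}"
    )

def call_model(prompt: str) -> str:
    """Placeholder for the HTTP call to a hosted model; in the setup
    described above, this is where all the 'AI work' happens."""
    raise NotImplementedError("POST the prompt to your hosted model here")

prompt = build_prompt("Summarize the ticket below.", "User reports login loops.")
```

      Everything that would traditionally be code (formatting rules, output constraints) ends up as natural-language pleading inside `build_prompt`.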

    • ImmersiveMatthew@sh.itjust.works

      I too am a developer, and I am sure you will agree that while the overall intelligence of models continues to rise, without a concerted focus on enhancing logic the promise of AGI will likely remain elusive. AI cannot really develop without the logic being dramatically improved, yet logic is rather stagnant even in the latest reasoning models, at least when it comes to coding.

      I would argue that if we had much better logic, with all other metrics being the same, we would have AGI now and developer jobs would be at risk. Given the lack of discussion about the logic gaps, I do not foresee AGI arriving anytime soon, even with bigger models coming.

      • Clent@lemmy.dbzer0.com

        If we had AGI, the number of jobs that would be at risk would be enormous. But these LLMs aren’t it.

        They are language models and until someone can replace that second L with Logic, no amount of layering is going to get us there.

        Those layers are basically all the previous AI techniques laid over the top of an LLM but anyone that has a basic understanding of languages can tell you how illogical they are.

        • ImmersiveMatthew@sh.itjust.works

          Agreed. I would add that not only would job loss be enormous, but many corporations are suddenly going to be competing with individuals armed with the same AI.

    • futatorius@lemm.ee

      If it walks and quacks like a speculative bubble…

      I’m working in an organization that has been exploring LLMs for quite a while now, and at least on the surface, it looks like we might have some use cases where AI could prove useful. But so far, in terms of concrete results, we’ve gotten bupkis.

      And most firms I’ve encountered don’t even have potential uses, they’re just doing buzzword engineering. I’d say it’s more like the “put blockchain into everything” fad than like outsourcing, which was a bad idea for entirely different reasons.

      I’m not saying AI will never have uses. But as it’s currently implemented, I’ve seen no use of it that makes a compelling business case.

  • moonlight@fedia.io

    Depends on what we mean by “AI”.

    Machine learning? It’s already had a huge effect, drug discovery alone is transformative.

    LLMs and the like? Yeah I’m not sure how positive these are. I don’t think they’ve actually been all that impactful so far.

    Once we have true machine intelligence, then we have the potential for great improvements in daily life and society, but that entirely depends on how it will be used.

    It could be a bridge to post-scarcity, but under capitalism it’s much more likely it will erode the working class further and exacerbate inequality.

    • MangoCats@feddit.it

      Machine learning? It’s already had a huge effect, drug discovery alone is transformative.

      Machine learning is just large-scale automated optimization, something that was done for many decades before; the hardware finally reached a level of power where automated searches started outperforming more informed, selective searches.

      The same way that AlphaZero got better at chess than Deep Blue - it just steam-rollered the problem with raw power.
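      The steam-roller point fits in a few lines: blind random search, given a big enough evaluation budget, closes in on an optimum with zero domain knowledge. The toy objective below is a made-up stand-in for a real search problem:

```python
import random

def objective(x: float) -> float:
    # Toy stand-in for a real search problem; minimum at x = 3.0.
    return (x - 3.0) ** 2

def random_search(budget: int, seed: int = 0) -> float:
    # Blind search: no gradients, no heuristics, just raw evaluations.
    rng = random.Random(seed)
    candidates = [rng.uniform(-100.0, 100.0) for _ in range(budget)]
    return min(candidates, key=objective)

# Same seed, so the bigger budget re-draws the same first 100 points and
# keeps going: more raw compute can only match or beat the cheap run.
cheap = objective(random_search(100))
brute = objective(random_search(100_000))
```

      An informed method would beat this at small budgets; the point is that enough raw evaluations make the informed part optional.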

    • Pennomi@lemmy.world

      As long as open source AI keeps up (it has so far) it’ll enable technocommunism as much as it enables rampant capitalism.

      • moonlight@fedia.io

        I considered this, and I think it depends mostly on ownership and means of production.

        Even in the scenario where everyone has access to superhuman models, that would still lead to labor being devalued. When combined with robotics and other forms of automation, the capitalist class will no longer need workers, and large parts of the economy would disappear. That would create a two-tiered society, where those with resources become incredibly wealthy and powerful, and those without have no ability to do much of anything, and would likely revert to an agricultural society (assuming access to land) or just be propped up with something like UBI.

        Basically, I don’t see how it would lead to any form of communism on its own. It would still require a revolution. That being said, I do think AGI could absolutely be a pillar of a post capitalist utopia, I just don’t think it will do much to get us there.

        • MangoCats@feddit.it

          or just be propped up with something like UBI.

          That depends entirely on how much UBI is provided.

          I envision a “simple” taxation system with UBI + flat tax. You adjust the flat tax high enough to get the government services you need (infrastructure like roads, education, police/military, and UBI), and you adjust the UBI up enough to keep the wealthy from running away with the show.
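          The arithmetic of flat tax plus UBI is easy to check; the 40 percent rate and $12,000 UBI below are made-up illustrative knobs, not figures from this comment:

```python
FLAT_TAX = 0.40   # illustrative flat rate on all income
UBI = 12_000.0    # illustrative annual UBI per person

def net_income(gross: float) -> float:
    # Everyone pays the same marginal rate and receives the same UBI.
    return gross * (1 - FLAT_TAX) + UBI

def effective_rate(gross: float) -> float:
    # Tax actually paid, net of UBI, as a share of gross income.
    return (gross - net_income(gross)) / gross

# The combination is effectively progressive: a $20k earner is a net
# recipient (effective rate -20%), while a $200k earner pays 34%,
# approaching the flat 40% only as income grows.
```

          Adjusting either knob moves the break-even income (UBI / FLAT_TAX, here $30k) where a person flips from net recipient to net payer.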

          Marshall Brain envisioned an “open source” based property system that’s not far off from UBI: https://marshallbrain.com/manna

        • MangoCats@feddit.it

          It would still require a revolution.

          I would like to believe that we could have a gradual transition without the revolution being needed, but… present political developments make revolution seem more likely.

        • FourWaveforms@lemm.ee

          It will only help us get there in the hands of individuals and collectives. It will not get us there, and will be used to the opposite effect, in the hands of the 1%.

      • Womble@lemmy.world

        Translation apps would be the main one for LLM tech; LLMs largely came out of Google’s research into machine translation.

        • MonkderVierte@lemmy.ml

          If that’s the case and LLMs are scaled-up translation models shoehorned into general use, it makes sense that they are so bad at everything else.

    • pinball_wizard@lemmy.zip

      Every technology shift creates winners and losers.

      There’s already documented harm from algorithms making callous biased decisions that ruin people’s lives - an example is automated insurance claim rejections.

      We know that AI is going to bring algorithmic decisions into many new places where it can do harm. AI adoption is currently on track to get to those places well before the most important harm reduction solutions are mature.

      We should take care that we do not gaslight people who will be harmed by this trend, by telling them they are better off.

  • VirtualOdour@sh.itjust.works

    People aren’t very smart, have trouble understanding new things, and fear change - of course they express negative opinions.

    Most Americans would have said the same about electricity, computers, the internet, mobile phones…

  • TommySoda@lemmy.world

    If it was marketed and used for what it’s actually good at this wouldn’t be an issue. We shouldn’t be using it to replace artists, writers, musicians, teachers, programmers, and actors. It should be used as a tool to make those people’s jobs easier and achieve better results. I understand its uses and that it’s not a useless technology. The problem is that capitalism and greedy CEOs are ruining the technology by trying to replace everyone but themselves so they can maximize profits.

    • count_dongulus@lemmy.world

      Maybe pedantic, but:

      Everyone seems to think CEOs are the problem. They are not. They report to and get broad instruction from the board. The board can fire the CEO. If you got rid of a CEO, the board will just hire a replacement.

      • Zorque@lemmy.world

        And if you get rid of the board, the shareholders will appoint a new one. If you somehow get rid of all the shareholders, like-minded people will slot themselves into those positions.

        The problems are systemic, not individual.

        • MangoCats@feddit.it

          Shareholders only care about the value of their shares increasing. It’s a productive arrangement, up to a point, but we’ve gotten too good at ignoring and externalizing the human, environmental, and long term costs in pursuit of ever increasing shareholder value.

    • faltryka@lemmy.world

      The natural outcome of making jobs easier in a profit driven business model is to either add more work or reduce the number of workers.

      • ferb@sh.itjust.works

        This is exactly the result. No matter how advanced AI gets, unless the singularity is realized, we will be no closer to some kind of 8-hour workweek utopia. These AI Silicon Valley fanatics are the same ones saying that basic social welfare programs are naive and un-implementable - so why would they suddenly change their entire perspective on life?

        • AceofSpades@lemmy.ca

          This vision of the AI making everything easier always leaves out the part where nobody has a job as a result.

          Sure you can relax on a beach, you have all the time in the world now that you are unemployed. The disconnect is mind boggling.

          • MangoCats@feddit.it

            Universal Basic Income - it’s either that or just kill all the unnecessary poor people.

      • Pennomi@lemmy.world

        Yes, but when the price is low enough (honestly free in a lot of cases) for a single person to use it, it also makes people less reliant on the services of big corporations.

        For example, today’s AI can reliably make decent marketing websites, even when run by nontechnical people. Definitely in the “good enough” zone. So now small businesses don’t have to pay Webflow those crazy rates.

        And if you run the AI locally, you can also be free of paying a subscription to a big AI company.

        • einkorn@feddit.org

          Except no employer will allow you to use your own AI model. Just as you can’t bring your own work equipment (which in many regards is even a good thing), companies will force you to use their specific type of AI for your work.

          • MangoCats@feddit.it

            No big employer… there are plenty of smaller companies who are open to doing whatever works.

          • Pennomi@lemmy.world

            Presumably “small business” means self-employed or other employee-owned company. Not the bureaucratic nightmare that most companies are.

    • MangoCats@feddit.it

      We shouldn’t be using it to replace artists, writers, musicians, teachers, programmers, and actors.

      That’s an opinion - one I share in the vast majority of cases, but there’s a lot of art work that AI really can do “good enough” for the purpose, and in those cases we really should be freeing up the human artists to do the more creative work. Writers: if AI is turning out acceptable copy (which in my experience is almost never, so far, but hypothetically, eventually), why use human writers to do that? And so on down the line.

      The problem is that capitalism and greedy CEOs are hyping the technology as the next big thing, looking for a big boost in their share price this quarter, not being realistic about how long it’s really going to take to achieve the things they’re hyping.

      “Artificial Intelligence” has been 5-10 years off for 40 years. We have seen amazing progress in the past 5 years as compared to the previous 35, but it’s likely to be 35 more before half the things being touted as “here today” are actually working at a positive ROI. There are going to be more than a few more examples like the “smart grocery store” where you just put things in your basket and walk out and get charged “appropriately”, supposedly based on AI surveillance but really mostly powered by low-cost labor somewhere else on the planet.

  • snooggums@lemmy.world

    Experts are working from their perspective, which involves being employed to know the details of how the AI works and its potential benefits. They are invested in it being successful as well, since they spent the time gaining that expertise. I would guess a number of them work in fields that are not easily visible to the public, and use AI systems in ways the public never will, because they are focused on things like pattern recognition on viruses or identifying locations to excavate for archaeology, tasks that always end with a human verifying the results. They use AI as a tool and see the indirect benefits.

    The general public’s experience is being told AI is a magic box that will be smarter than the average person, makes flashy images, and sounds more like a person than previous automated voice systems. They see it spit out a bunch of incorrect or incoherent answers, because they are using it the way it was promoted: as actually intelligent. They also see this unreliable tech being jammed into things that worked previously, and the negative outcome of the hype not meeting the promises. They reject it because the way it is being pushed onto the public does not meet the expectations set by the advertising.

    That is before the public is being told that AI will drive people out of their jobs, which is doubly insulting when it does a shitty job of replacing people. It is a tool, not a replacement.

  • turnip@lemm.ee

    https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice#demo

    Try this voice AI demo on your phone, then imagine if it can create images and video.

    This in my opinion changes every system of information gathering that we have, and will usher in an era of geniuses who grew up with access to the answer to their every question in a granular, pictorial video response. If you want to learn, for example, how white blood cells work, you can ask your chatbot for a video, and then tell it to put in different types of bacteria to see the response. It’s going to make a lot of the systems we have now obsolete.

    • TimewornTraveler@lemm.ee

      you can’t learn from chatbots though. how can you trust that the material is accurate? any time I’ve asked a chatbot about subject matter that I’m well versed in, they make massive mistakes.

      All you’re proving is “we can learn badly faster!” or worse, we can spread misinformation faster.

      • turnip@lemm.ee

        Mistakes will be fewer in the future, and it’s already pretty good now for subjects with a lot of textbooks and research. I don’t think this is that big of an impediment; it will still create geniuses all over the globe.

    • sgtgig@lemmy.world

      Removing the need to do any research is just removing another exercise for the brain. Perfectly crafted AI educational videos might be closer to mental junk food than anything.

      • turnip@lemm.ee

        It is mental junk food, and it’s addictive, which is why I think it will be so effective. If you can make learning addictive, then it’s bound to raise the average global IQ.

      • undrwater@lemmy.world

        Same was said about calculators.

        I don’t disagree though. Calculators are pretty discrete and the functions well defined.

        Assuming AI can be trusted to be accurate at some point, it will reduce cognitive load, which can then be utilized for even higher thinking.

  • carrion0409@lemm.ee

    Because it won’t. So far it’s only been used to replace people and cut costs. If it were used for what it was actually intended for then it’d be a different story.

    • doodledup@lemmy.world

      Replacing people is a good thing. It means fewer people do more work. It means progress. It means products and services will get cheaper and more available. The fact that people are being replaced means that AI actually has tremendous value for our society.

      • stardust@lemmy.ca

        Great for the people getting fired, or finding that the middle-class jobs they used to have now pay lower-class wages or are obsolete. They will be so delighted at the progress despite their salaries, employment benefits, and opportunities falling.

        And it’s so nice that AI is most concentrated in the hands of billionaires who are oh so generous with improving living standards of the commoners. Wonderful.

        • doodledup@lemmy.world

          This is collateral damage of societal progress. This is a phenomenon as old as humanity. You can’t fight it. And it has brought us to where we are now. From cavemen to space explorers.

          • mriormro@lemm.ee

            Oh hey, it’s the Nazi apologist. Big shock you don’t give a fuck about other people’s lives.

            • doodledup@lemmy.world

              You sound really stupid when calling me a Nazi under this comment.

              Almost every comment of yours is insulting in some way or the other. I’m starting to think you’re some kind of (Russian) troll and don’t care about contributing anything worthwhile to these threads.

          • stardust@lemmy.ca

            Which are separate things from people’s ability to financially support themselves.

            People can have smartphones and tech the past didn’t have, but be increasingly worse off financially and unable to afford housing.

            And you aren’t a space explorer.

            I’m not arguing about whether innovation is cool. It is.

            I however strongly disagree with your claim that people being replaced is good. That assumes society is being guided with altruism as a cornerstone of motivation to create some Star Trek future to free up people to pursue their interests, but that’s a fantasy. Innovation is simply innovation. It’s not about whether people’s lives will be improved. It doesn’t care.

            The world can be the most technologically advanced it’s ever been, with space travel for the masses, and still be a totalitarian dystopia. People could be poorer than ever and become corpo slaves, but it would fit under the definition of societal progress because of innovation.

            • doodledup@lemmy.world

              People can have smartphones and tech the past didn’t have, but be increasingly worse off financially and unable to afford housing.

              You really have no idea what life was like just two or three generations ago. At least you now have toilet paper and running water, you can shower, and you don’t starve to death when the pig in your backyard dies of some illness. Life was FUCKING HARD, man. Affording a house is your problem? Really?

              And you aren’t a space explorer.

              The smoke detector, the microwave and birth control pills were invented around the time when we landed on the moon.

            • Womble@lemmy.world

              People being economically displaced by innovation increasing productivity is good, provided it happens at a reasonable pace and there is a sufficient social safety net to get those people back on their feet. Unfortunately those safety nets don’t exist everywhere and have been under attack (in the West) for the past 40 years.

          • mriormro@lemm.ee

            Whoever the mod was that decided to delete my comment is a fool. This guy above is a Nazi apologist.

            • doodledup@lemmy.world

              What makes you think that? You can’t just go around and insult people personally without elaborating on the reason.

        • doodledup@lemmy.world

          Great for the people getting fired, or finding that the middle-class jobs they used to have now pay lower-class wages or are obsolete. They will be so delighted at the progress despite their salaries, employment benefits, and opportunities falling.

          This shouldn’t come as a surprise. Everyone who’s surprised by it is either not educated in how the economy works or in how societal progress works. There are always winners and losers, but society makes net-positive progress as a whole.

          I have no empathy for people losing their jobs. Even if I lose my job, I accept it. It’s just life. Humanity is a really big machine of many gears. Some gears replace others to make the machine run more efficiently.

          And it’s so nice that AI is most concentrated in the hands of billionaires who are oh so generous with improving living standards of the commoners. Wonderful.

          This is just a sad excuse I’m hearing all the time. The moment society gets intense and change is about to happen, a perpetrator needs to be found. But most people don’t realize that the people at the top change all the time when the economy changes. They die as well. It’s a dynamic system. And there is no one perpetrator in a dynamic system. The only perpetrator is progress. And progress is like entropy. It always finds its way and you cannot stop it. Those who attempt to stop it instead of adapting to it will be crushed.

      • OnASnowyEvening@lemmy.world

        I trust you’ve volunteered for it to replace you then. It being so beneficial to society, and all.

        It means fewer people do more work.

        And then those people no longer working… do what, exactly? Fewer well-paying jobs, same number of people, increasing costs. Math not working out here.

        The fact that people are being replaced means that AI actually has tremendous value for our society.

        Oh, it has value. Just not for society (it could, and that’s the sad part). For very specific people, though, yeah, value. Just got to step on all the little people along the way, like we’ve always done, eh?

        Yeah, rather than volunteering, it’s more likely you lack a basic characteristic of humanity some of us like to refer to as “empathy”. And if – giving you the benefit of the doubt – you’re just a troll… well, my statement stands.

        • doodledup@lemmy.world

          I trust you’ve volunteered for it to replace you then. It being so beneficial to society, and all.

          Yes. If I get replaced by something more efficient I accept that. I am no longer worth the position of my job. I will look for something else and try to find ways to apply some of my skillsets in other ways. I may do some further training and education, or just accept a lower paying job if that’s not possible.

          And then those people no longer working… do what, exactly? Fewer well-paying jobs, same number of people, increasing costs. Math not working out here.

          Can you elaborate? I don’t quite understand what you mean by that. The people who no longer work need to find something else. There will remain only a fraction that can never find another job again. And that fraction is offset by the increased productivity of society.

          Oh, it has value. Just not for society (it could that’s the sad part). For very specific people though, yeah, value. Just got to step on all the little people along the way, like we’ve always done, eh?

          Can you specify “specific”? What little people? If you use very vague terminology like that, you should back it up with some arguments. I personally see no reason why AI would disadvantage working people any more than the sewing machine did back in the day. Besides, when you think about it you’ll find that defining the terms you used is actually quite difficult in a rapidly changing economy, when you don’t know to whom these terms might apply in the end.

          I have a feeling you’re not actually thinking this through, or at least doing it on a very emotional level. This will not help you adapt to the changing world. The very opposite actually.

  • PunkRockSportsFan@fanaticus.social
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    The number of failed efforts the ruling class has made to corner AI shows me that it is a democratizing force.

    I reap benefits from it already.

    I can create local models with zero involvement from billionaires.

    It scares them more than us.

    And it should. It shows how evil they are. It’s objectively true. AI knows it.

    • SeeMarkFly@lemmy.ml
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      There is a BIG difference between what you can do and what you should do.

      We have ZERO understanding on the long term effects this new technology will have on our civilization.

      Why is everybody so eager to go “all in”?

      • PunkRockSportsFan@fanaticus.social
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 month ago

        We have zero understanding of the long-term effect of any new tech on our civilization.

        But we know those who adopt early and gain mastery quickly are set up better for success in the future.

        Every time.

        • SeeMarkFly@lemmy.ml
          link
          fedilink
          English
          arrow-up
          0
          ·
          edit-2
          1 month ago

          Not understanding sparrows’ role in the ecosystem is what caused the Four Pests campaign disaster. MILLIONS DEAD.

          The ecological repercussions translated into a humanitarian crisis of unprecedented proportions. The absence of sparrows, which traditionally kept locust populations in check, allowed swarms to ravage fields of grain and rice. The resulting agricultural failures, compounded by misguided policies of the Great Leap Forward, triggered a severe famine from 1958 to 1962. The death toll from starvation during this period reached 20 to 30 million people.

          https://en.wikipedia.org/wiki/Four_Pests_campaign#Consequences

              • PunkRockSportsFan@fanaticus.social
                link
                fedilink
                English
                arrow-up
                0
                ·
                1 month ago

                You should check out twitter. It’s like the place for insecure people to impotently rage that someone said some words that weren’t good enough.

                Best wishes I guess. You seem to need them.

                • SeeMarkFly@lemmy.ml
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  1 month ago

                  You seem to need to tell people what they didn’t say and tell them where you want them to go.

                  So… Could be a Republican, or a troll farm, or an A.I. extension for Chrome, or a bot, or my Ex (we still don’t get along).

                  Say another stupid thing so I can figure it out.

          • Womble@lemmy.world
            link
            fedilink
            English
            arrow-up
            0
            ·
            1 month ago

            We also didn’t understand how the internet would change the world, still went ahead with it. We didn’t understand how computers would change the world, still went ahead with it. We didn’t understand how the steam engine would change the world… etc etc.

            No one can know how a new invention will change things, but you are not going to be able to crush humans’ innate creativity and drive to try new things. Sometimes those things are going to be a net negative, and that’s bad, but the alternative is to insist nothing new is tried, and that’s (a) bad and (b) not possible.

        • SeeMarkFly@lemmy.ml
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 month ago

          Sort of like what the tobacco industry did? Hide the truth under corporate profits?

    • nadram@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      But you’re using these billionaires’ AI models, are you not? Even if you use the free models, they still benefit from your profile and query data.

        • einkorn@feddit.org
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 month ago

          Uhm, I guess you missed the news when it was revealed that Deepseek had a little more backing than they claimed.

        • mesamunefire@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 month ago

          Yep you can run models without giving $$ to tech billionaires!

          Now we are giving it to the power billionaires! Unless you own your own power sources.

            • mesamunefire@lemmy.world
              link
              fedilink
              English
              arrow-up
              0
              ·
              1 month ago

              Meh, I like some of the others on Hugging Face a bit more for coding and such. But it’s all the same at the end of the day. I do like what you are saying though!

              Models + moderate power should be what we strive for. I’m hoping for a Star Trek ending where we live in a post-scarcity world. I’m planning on a post-apocalypse haha.

              Once ASIC chips come out (essentially a specific model baked into a chip), the amount of power we use will be dramatically less.
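
              The power claim above can be sketched with a quick back-of-the-envelope calculation. Every number below is an assumption for illustration only (none of these are benchmarks of any real GPU or ASIC); the point is just how energy per token drops when a fixed-function chip replaces a general-purpose GPU:

              ```python
              # Back-of-the-envelope energy-per-token comparison.
              # All numbers are illustrative assumptions, not measurements.
              gpu_watts = 350            # assumed power draw of a consumer GPU under load
              gpu_tokens_per_sec = 50    # assumed LLM decode speed on that GPU

              asic_watts = 15            # assumed draw of a hypothetical inference ASIC
              asic_tokens_per_sec = 150  # assumed decode speed on that ASIC

              gpu_j_per_token = gpu_watts / gpu_tokens_per_sec      # energy per token on the GPU
              asic_j_per_token = asic_watts / asic_tokens_per_sec   # energy per token on the ASIC

              print(f"GPU:  {gpu_j_per_token} J/token")
              print(f"ASIC: {asic_j_per_token} J/token")
              print(f"ratio: {gpu_j_per_token / asic_j_per_token:.0f}x less energy per token")
              ```

              Swap in your own measured numbers; the interesting output is the ratio, not the absolute figures.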

                • mesamunefire@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  edit-2
                  1 month ago

                  It’s an interesting field! I think the reason we have not gone there is that the LLM-specific models all have very different architectures/languages/etc… right now. So the algorithms that create them and use them need flexibility. GPUs are very flexible in what they can do with their multiprocessing.

                  But in 5 years’ (or less) time, I can see a black-box kind of system that runs at 1000x+ speed and will make GPU LLMs obsolete. All the new GPU farm places that are popping up will have a rude awakening lol.