I have mixed feelings about that company. They have some interesting things “going against the flow” like ditching the cloud and going back to on prem, hating on microservices, advocating against taking money from VCs, and now hiring juniors. On the other hand, the guy is a Musk fanboy and they push some anti-DEI bullshit. Also he’s a TypeScript hater for some reason…
Most smart AI “developer”
We’re still far away from AI replacing programmers. Replacing other industries, sure.
Right, it’s the others that are cooked.
Fake review writers are hopefully retraining for in-person scams.
Know a guy who tried to use AI to vibe code a simple web server. He wasn’t a programmer and kept insisting to me that programmers were done for.
After weeks of trying to get the thing to work, he had nothing. He showed me the code, and it was the worst I’ve ever seen. Dozens of empty files where the AI had apparently added and then deleted the same code. Also some utter garbage code. Tons of functions copied and pasted instead of being defined once.
I then showed him a web app I had made in that same amount of time. It worked perfectly. Never heard anything more about AI from him.
“no dude he just wasn’t using [ai product] dude I use that and then send it to [another ai product]'s [buzzword like ‘pipeline’] you have to try those out dude”
I understand the motivated reasoning of upper management thinking programmers are done for. I understand the reasoning of other people far less. Do they see programmers as one of the few professions where you can afford a house and save money, and instead of looking for ways to make that happen for everyone, decide that programmers need to be taken down a notch?
I’m an engineer and can vibe code some features, but you still have to know wtf the program is doing overall. AI makes good programmers faster; it doesn’t make ignorant people know how to code.
AI is very very neat but like it has clear obvious limitations. I’m not a programmer and I could tell you tons of ways I tripped Ollama up already.
But it’s a tool, and the people who can use it properly will succeed.
This. I have no problem combining a couple of endpoints in one script and explaining to QWQ what my final CSV file based on those JSONs should look like (something like the sketch below). But try to go beyond that, push past 32k context, or show it multiple scripts, and the poor thing has no clue what to do.
If you can manage your project and break it down into multiple simple tasks, you could build something complicated via an LLM. But that requires some knowledge of coding, and at that point chances are you’ll have better luck writing the whole thing yourself.
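To give an idea of the scale that works, here’s a minimal sketch of that endpoints-to-CSV kind of task (the URLs and field names are made up, and it assumes the requests library):

```python
import csv

import requests  # third-party; pip install requests

# Hypothetical endpoints standing in for whatever APIs you're combining.
USERS_URL = "https://api.example.com/users"
ORDERS_URL = "https://api.example.com/orders"


def fetch_json(url):
    """GET a URL and return the decoded JSON body."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.json()


def main():
    # Index users by id so each order row can be joined to its user.
    users = {u["id"]: u for u in fetch_json(USERS_URL)}
    orders = fetch_json(ORDERS_URL)

    with open("report.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["order_id", "user_name", "total"])
        for order in orders:
            user = users.get(order["user_id"], {})
            writer.writerow([order["id"], user.get("name", "unknown"), order["total"]])


if __name__ == "__main__":
    main()
```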
I think it’s most useful as an (often wrong) line completer more than anything else. It can take in an entire file and just try to figure out the rest of what you are currently writing. Its context window simply isn’t big enough to understand an entire project.
That and unit tests. Since unit tests are by design isolated, small, and unconcerned with the larger project, AI has at least a fighting chance of competently producing them. That still takes significant hand holding, though.
It’s great for verbose log statements
I’ve used them for unit tests and it still makes some really weird decisions sometimes. Like building an array of json objects that it feeds into one super long test with a bunch of switch conditions. When I saw that one I scratched my head for a little bit.
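For the curious, the shape of it was roughly this (a reconstruction from memory, not the actual generated code, with an if/elif chain standing in for the switch):

```python
import unittest

# The AI built one big array of JSON-ish case objects...
CASES = [
    {"op": "add", "a": 2, "b": 3, "expected": 5},
    {"op": "sub", "a": 5, "b": 3, "expected": 2},
    {"op": "mul", "a": 4, "b": 3, "expected": 12},
]


class TestEverything(unittest.TestCase):
    def test_all_the_things(self):
        # ...then fed it into a single giant test that branches per case,
        # so the first failing case hides every case after it and the
        # test report names none of them.
        for case in CASES:
            if case["op"] == "add":
                result = case["a"] + case["b"]
            elif case["op"] == "sub":
                result = case["a"] - case["b"]
            else:
                result = case["a"] * case["b"]
            self.assertEqual(result, case["expected"])


if __name__ == "__main__":
    unittest.main()
```

One test per case (or a parameterized test) would at least report each failure individually.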
I most often just get it straight up misunderstanding how the test framework itself works, but I’ve definitely had it make strange decisions like that. I’m a little convinced that the only reason I put up with it for unit tests is because I would probably not write them otherwise haha.
Oh, I am right there with you. I don’t want to write tests because they’re tedious, so I backfill with the AI at least starting me off on it. It’s a lot easier for me to fix something (even if it turns into a complete rewrite) than to start from a blank file.
Isn’t writing tests with AI like a really bad idea? I mean, the whole point of writing separate tests is hoping that you won’t make the same mistakes twice, and therefore catching any behavior in the code that does not match your intent. But if you use an LLM to write a test using said code as context (instead of the original intent you would use yourself), there’s a risk that it’ll just write a test case that makes sure the code contains the wrong behavior.
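A contrived sketch of the failure mode (hypothetical code, not from any real project):

```python
def price_with_discount(price, percent):
    # Buggy: should be price * (1 - percent / 100).
    return price * (1 - percent)


# A test generated from the code (rather than from the intent)
# cheerfully locks the bug in: a "10% discount" on 100 yields -900,
# and the suite goes green while the behavior stays wrong.
def test_price_with_discount():
    assert price_with_discount(100, 10) == -900
```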
Granted, it might still be okay for regression testing, but you’re still missing most of the benefit you’d get by writing the tests manually. Unless you only care about closing tickets, that is.
“Unless you only care about closing tickets, that is.”
Perfect. I’ll use it for tests at work then.
I’ve used it most extensively for non-professional projects, where if I wasn’t using this kind of tooling to write tests, they would simply not be written. That means no tickets to close either. That said, I am aware that the AI is almost always at best testing for regression (I have had it correctly realise my logic is incorrect and write tests that catch it, but that is by no means reliable). Part of the “hand holding” I mentioned involves making sure it has sufficient coverage of use cases and edge cases, and that what it expects to be the correct result is actually correct according to intent.
I essentially use the AI to generate a variety of scenarios and complementary test data, then further evaluate its validity and expand from there.
Funny. Every time someone points out how god awful AI is, someone else comes along to say “It’s just a tool, and it’s good if someone can use it properly.” But nobody who uses it treats it like “just a tool.” They think it’s a workman they can claim the credit for, as if a hammer could replace the carpenter.
Plus, the only people good enough to fix the problems caused by this “tool” don’t need to use it in the first place.
“But nobody who uses it treats it like ‘just a tool.’”
I do. I use it to tighten up some lazy code that I wrote, or to help me figure out a potential flaw in my logic, or to suggest a “better” way to do something if I’m not happy with what I originally wrote.
It’s always small snippets of code and I don’t always accept the answer. In fact, I’d say less than 50% of the time I get a result I can use as-is, but I will say that most of the time it gives me an idea or puts me on the right track.
I take issue with the “replacing other industries” part.
I know that this is an unpopular opinion among programmers, but all professions have roles that range from small skill sets and little cognitive ability to large skill sets and high-level cognitive ability.
Generative AI is an incremental improvement in automation. In my industry it might make someone 10% more productive. For any role where it could make someone 20% more productive, that role could have been made more efficient in some other way, be it training, templates, simple conversion scripts, whatever.
Basically, if someone’s job can be replaced by AI then they weren’t really producing any value in the first place.
Of course, this means that in a firm with 100 staff, you could get the same output with 91 staff plus Gen AI (at 10% more productive each, 91 × 1.1 ≈ 100). So yeah, in that context 9 people might be replaced by AI, but that doesn’t tend to be how things go in practice.
“I know that this is an unpopular opinion among programmers, but all professions have roles that range from small skill sets and little cognitive ability to large skill sets and high-level cognitive ability.”
I am kind of surprised that is an unpopular opinion. I figure there is a reason we compensate people for jobs. Pay people to do stuff you cannot, or do not have the time to do, yourself. And for almost every job there is probably something that is way harder than it looks from the outside. I am not the most worldly of people but I’ve figured that out by just trying different skills and existing.
Programmers like to think that programming is a special profession which only super smart people can do. There’s a reluctance to admit that there are smart people in other professions.
There are around 50 models listed as supported for function calling in llama.cpp, and a half dozen or so different APIs. How many people have tried even a few of these? There is even a single model with its own dedicated API supported in llama.cpp function calling. The Qwen VL models look very interesting if the supported image-recognition setup is built.
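For anyone who wants to poke at it, a rough sketch of exercising function calling against a local llama-server, assuming its OpenAI-compatible chat endpoint; the port, model name, and tool definition here are placeholders:

```python
import json

import requests  # third-party; pip install requests

# Assumes llama-server is running locally with a model that supports
# tool calling; adjust the port to your setup.
URL = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "whatever-is-loaded",  # llama-server typically serves one loaded model
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

resp = requests.post(URL, json=payload, timeout=120)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]

# If the model decided to call the tool, the arguments arrive as JSON text.
for call in message.get("tool_calls") or []:
    print(call["function"]["name"], json.loads(call["function"]["arguments"]))
```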
I’m not really clear what you’re getting at.
Are you suggesting that the commonly used models might only be an incremental improvement, but some of the less common models are ready to take accountants’ and lawyers’ and engineers’ and architects’ jobs?
$145,849 is a very specific salary. Is it numerology or a math puzzle?
Relevant xkcd: https://xkcd.com/2597/
Probably just what their hiring algorithm spat out, or a market average, or something.
Yeah DHH is a problematic person to root for.
THAT is the message you took from all this? What, you’re going to root for the smug, ignorant asshole?
It’s even funnier because the guy is mocking DHH. You know, the creator of Ruby on Rails. Which 37signals obviously uses.
I know from experience that a) Rails is a very junior-developer-friendly framework, yet incredibly powerful, and b) all Rails apps are colossal machines with a lot of moving parts. So when the scared juniors look at the apps for the first time, the senior Rails devs are like “Eh, don’t worry about it, most of the complex stuff is happening in the background; the only way to break it is if you genuinely have no idea what you’re doing and screw things up on purpose.” Which leads to point c) using AI coding with Rails codebases is usually like pulling open the side door of this gargantuan machine and dropping a sack of wrenches into the gears.
As an end user with little knowledge about programming, I’ve seen how hard it is for programmers to get things working well many times over the years. AI as a time saver for certain simple tasks, sure, but no way in hell they’ll be replacing humans in my lifetime.
A person who hasn’t debugged any code thinks programmers are done for because of “AI”.
Oh no. Anyways.
You can say “fucked” on the internet, Ace Rbk.
It’s better to future-proof your account for when Gilead is claimed.
Oh no, he’s a cannibal.
Hey cool, an AI can program itself as well as a human can now. Think of how this will impact the programmer job market! That’s got to be like, the biggest implication of this development.
The day that AI can program perfectly is the day it can improve itself perfectly, and that’s the day we’ll all be fucked.
I personally vote for some sort of direct brain interface (no Elmo, you’re not allowed to play) that DOES allow direct recall of queries but does NOT allow ads (ffs), one that lets us grow with AI in intelligence. If you can’t beat ’em (we can’t), join ’em.
I highly doubt some of these rich fucks would pass up an opportunity to put ads straight into people’s brains.
AI is certainly a very handy tool and has helped me out a lot but anybody who thinks “vibe programming” (i.e. programming from ignorance) is a good idea or will save money is woefully misinformed. Hire good programmers, let them use AI if they like, but trust the programmer’s judgement over some AI.
That’s because you NEED that experience to notice the AI is outputting garbage. Otherwise it looks superficially okay, but the code is terrible, or fragile, or not even doing what you asked properly. E.g. if I asked Gemini to generate a web server with Jetty, it might output something correct, or an unholy mess of Jetty 8, 9, 10, 11, and 12 with annotation and/or programmatic styles, and the correct/incorrect pom dependencies.
AI is great for learning a language, partly because it’s the right combination of useful and stupid.
It’s familiar with the language in a way that would take some serious time to attain, but it also hallucinates things that don’t exist, and its solution to debugging something often ends up being literally just changing variable names or doing the same wrong things in different ways. But seeing what works and what doesn’t, and catching it when it’s spiraling, is a pretty good learning experience. You can get a project rolling while you’re learning how to implement what you want to do, without spending weeks or months wondering how. It’s great for filling gaps and giving enough context to start understanding how a language works by sheer immersion, especially if the application of that language comes with robust debugging built in.
I’ve been using it to help me learn and implement GDScript while I’m working on my game, and it’s been incredibly helpful. Stuff that would have taken weeks of wading through YouTube tutorials and banging my head against complex concepts and math I just don’t have, I can instead work my way through in days or even hours.
Gradually I’m getting more and more familiar with how the language works by doing the thing, and when it screws up and doesn’t know what it’s talking about I can see that in Godot’s debugging and in the actual execution of the code in-game. For a solo indie dev who’s doing all the art, writing, and music myself, having a tool to help me move my codebase forward while I learn has been pretty great. It also means that I can put systems in place that are relevant to the project so my modding partner who doesn’t know GDScript yet has something relevant to look at and learn from by looking through the project’s git.
But if I knew nothing about programming? If I wasn’t learning enough to fix its mistakes and sometimes abandon it entirely to find solutions to things it can’t figure out? I’d be making no progress or completely changing the scope of the game to make it a cookie cutter copy of the tutorials the AI is trained on.
Vibe coding is complete nonsense. You still need a competent designer who’s at least in the process of learning the context of the language they’re working with or your output is going to be complete garbage. And if you’re working in a medium that doesn’t have robust built-in debugging? Good luck even identifying what it’s doing wrong if you’re not familiar with the language yourself. Hell, good luck getting it to make anything complex if you have multiple systems to consider and can’t bridge the gaps yourself.
Corpo idiots going all in on “vibe coding” are literally just going to do indies a favor by churning out unworkable garbage, and anyone who puts the effort in will easily shine by comparison.
It’s a good teacher, though, and a decent assistant.
Everyone’s convinced their thing is special, but everyone else’s is a done deal.
Meanwhile the only task where current AI seems truly competitive is porn.
“Everyone’s convinced their thing is special, but everyone else’s is a done deal.”
I’m sad it makes me sound like such a pie-in-the-sky hippie when I say I think everyone’s contributions are not just special, but essential, and that’s why this whole mentality pisses me off so much, especially in the indie space.
- Artists are like “Finally, I can have AI do my code! But good art takes a special touch.”
- Coders are like “Finally, I can have AI do my art! But good code takes a special touch.”
- “Idea Guys”, who never learned anything but want to make a game because they like playing them, are leading the charge. They’re so excited to put everyone who makes those games out of a job because they think they’ll finally get to “achieve their dreams” with freaking prompts.
But for the people who do the work: why the heck are skilled artisans so ready to sell out their comrades? This “highly competitive” nonsense and the one-great-glorious-man myth have simply turned us on each other, while the people with pointless bullshit jobs are somehow still employed, serving only to harass and bother the people getting things done.
“Meanwhile the only task where current AI seems truly competitive is porn.”
Well it sure has a heckuva data set from every possible angle and lighting setup, doesn’t it? 😬 Lol
I’d suggest that if you think AI porn is anywhere near the real thing, that’s probably because you think porn is already slop in the same way that these AI bros think of code or creative writing or whatever other information-based thing you already know AI can’t do well.
Porn isn’t slop, people aren’t just interestingly-shaped slabs of meat. Sex is fundamentally about interpersonal connection. It might be one of the things that LLMs and robots are the worst at.
Most porn is definitely slop.
Most commercially produced media is slop. Porn isn’t special in that regard.
That doesn’t mean porn is somehow specially devoid of artistic merit. Done well it can be beautiful and meaningful.
You’ve got a stereotype in your head that was put there by a misogynistic culture, but that’s not inherent to the genre.
Not everyone is there for the interpersonal connection. Some really are just that base and pathetic.
Having said that, seeking personal connection (or just sex) is a mistake in this age. Best to learn to let go, and get used to suffering.
Wanking is about the emotional connection to a JPG, said someone I deeply pity.
Who was that? I said sex is about interpersonal connection. I didn’t learn that from porn, I learned it from sex.
I trusted the audience to understand that good porn or erotica in general should be about portraying that connection in some form, which is what is actually hot about sex, but maybe I gave you too much credit.
But hey, if sexuality to you is really that shallow, you’re free to pity me, because I put absolutely no stock in your opinion.
Who wouldn’t pity those who make do with a lossy compression image format?
I do not need lossless copies of an image someone didn’t even draw.
They’re about 2% better at being a telephone IVR than the older ones, probably at 6x the power cost.
AI is really good at creating images of Jesus that boomers say “amen” to.
So is toast.
False. Porn is sexy, and I can’t possibly be aroused by an image of a woman spreading her cheeks when her fingers are attached to her arse with a continuous piece of flesh, giving her skin the same topography as a teapot.
I absolutely hate this, thanks
Damning comments from 2023.
I’ll stop saying it if it stops being true.