simple@lemm.ee to Technology@lemmy.world · English · 2 months ago
FOSS infrastructure is under attack by AI companies (thelibre.news) · 24 comments
cross-posted to: technology@beehaw.org, technology@lemmy.world, opensource@lemmy.ml
HubertManne@piefed.social · English · 2 months ago
Any idea what the point of these is, then? Sounds like it's reporting a fake bug.
wjs018@piefed.social · English · 2 months ago
The theory that the lead maintainer had (he is an actual software developer, I just dabble) is that it might be a type of reinforcement learning:

1. Get your LLM to create what it thinks are valid bug reports/issues
2. Monitor the outcome of those issues (closed immediately, discussion, eventual pull request)
3. Use those outcomes to score how "good" or "bad" each generated issue was
4. Feed that scoring back into the model to influence it to create more "good" issues

If this is what's happening, then it's essentially offloading your LLM's reinforcement learning scoring to open source maintainers.
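The feedback loop the commenter theorizes could be sketched roughly like this. Everything here is an illustrative assumption (the outcome labels, the reward values, the function names); no real pipeline is known, this just makes the described scoring step concrete:

```python
# Hypothetical sketch of the theorized loop: maintainers' reactions to
# LLM-generated issues become reward signals. Labels and values are
# invented for illustration.

def outcome_reward(outcome: str) -> float:
    """Map how maintainers handled an issue to a reward score."""
    rewards = {
        "closed_immediately": -1.0,  # treated as spam or invalid
        "discussion": 0.5,           # maintainers engaged with it
        "pull_request": 1.0,         # led to an actual fix
    }
    return rewards.get(outcome, 0.0)

def score_batch(outcomes: list[str]) -> float:
    """Average reward over a batch of generated issues; this average
    would feed back into training to favor "good" issues."""
    if not outcomes:
        return 0.0
    return sum(outcome_reward(o) for o in outcomes) / len(outcomes)

batch = ["closed_immediately", "discussion", "pull_request"]
print(round(score_batch(batch), 3))  # mean of -1.0, 0.5, 1.0
```

The key point of the theory is the last step: the expensive part of reinforcement learning (human judgment of output quality) is being extracted for free from maintainers triaging the issues.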