The Picard Maneuver@lemmy.world to Lemmy Shitpost@lemmy.world · 2 months ago
Doesn't look like anything to me. (image post)
LarmyOfLone@lemm.ee · 2 months ago
And if it could distinguish better, it could also generate better.

Natanael@infosec.pub · 2 months ago
Not necessarily, but the errors would be less obvious or weirder, since it would spend more time in training.

LarmyOfLone@lemm.ee · 2 months ago
Weirder? Interesting, like how for example?

Natanael@infosec.pub · 2 months ago
Weirder in that it gets better at "photorealism" (textures, etc.) but the subjects might be nonsensical. Only teaching it how to avoid automated detection will not teach it to understand what scenes mean.
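For context, the dynamic the thread describes is essentially adversarial (GAN-style) training: a generator improves only by learning to fool a detector, and the detector only judges realism, not whether the scene makes sense. Below is a minimal, illustrative sketch of that loop in PyTorch; the layer sizes, image dimensions, and model names are placeholders, not taken from any particular image model.

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # e.g. flattened 28x28 images (illustrative)

# Toy generator and detector ("discriminator"); sizes are arbitrary.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # single real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise)

    # Detector step: learn to tell real images from generated ones.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to make the detector say "real".
    # Note: this objective only rewards fooling the detector (texture-level
    # realism); nothing here teaches the generator what a scene *means*.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

This is why "if it could distinguish better, it could also generate better" holds only up to a point: the generator's gradient comes entirely from the detector's realism score, so improving the detector sharpens textures and suppresses obvious artifacts, but it adds no signal about whether the subjects or scene composition are sensible.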